[Doctorow-L] Past Performance is Not Indicative of Future Results

Cory Doctorow doctorow at craphound.com
Mon Nov 2 22:04:19 EST 2020


My latest Locus Magazine column is "Past Performance is Not Indicative
of Future Results," an essay about the limits of machine learning and
the reason that statistical inference will not lead to consciousness.

https://locusmag.com/2020/11/cory-doctorow-past-performance-is-not-indicative-of-future-results/

At its core, machine learning is "theory-free correlation detection" -
that is, it takes training data and finds things that appear together in
it. Two things labeled as eyes, one thing labeled as a nose, and one
thing labeled as a mouth all add up to a face.

But the classifier doesn't know what noses, eyes, or mouths are. It
doesn't know what a face is. Your doorbell camera doesn't know that the
face-like thing in the melting snow on your walk CAN'T be a face, so it
repeatedly warns you about a stranger on your doorstep.
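
Here's a toy sketch of that failure mode. Everything in it is invented
for illustration - a real detector emits scores over learned features,
not neat labels - but the shape is the same: the "classifier" only
checks that the right labels co-occur, with no model of what a face is.

    def looks_like_face(blobs):
        """blobs: labels emitted by some upstream feature detector."""
        counts = {}
        for label in blobs:
            counts[label] = counts.get(label, 0) + 1
        return (counts.get("eye", 0) >= 2
                and "nose" in counts and "mouth" in counts)

    real_face  = ["eye", "eye", "nose", "mouth"]
    snow_patch = ["eye", "eye", "nose", "mouth"]  # same labels, says the camera

    print(looks_like_face(real_face))   # True
    print(looks_like_face(snow_patch))  # True - cue the stranger alert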

That theory-free-ness, combined with the abstruse mathematics of
statistics, is what gets "AI" into so much trouble. Feed a
machine-learning classifier data on all the successful people at your
company and it will tell you to hire people like them.

But if you've been missing great people due to bias, that is terrible
advice - and it's got the veneer of empiricism. Remember when AOC got
tons of shit from far-right assholes for calling an algorithm racist?
How can math be racist?

https://www.livescience.com/64621-how-algorithms-can-be-racist.html
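
Here's a toy version of that hiring classifier, with invented,
synthetic records. The "prediction" for a new candidate is just the
historical success rate of people who look like them on paper - so a
group that past bias kept out of the building scores as unhirable.

    past_hires = [
        {"school": "StateU", "group": "A", "succeeded": True},
        {"school": "StateU", "group": "A", "succeeded": True},
        {"school": "TechU",  "group": "A", "succeeded": False},
        # Group B was never hired, so it never enters the training data.
    ]

    def predicted_success(candidates, group):
        rows = [r for r in candidates if r["group"] == group]
        if not rows:
            return 0.0  # no data gets read as evidence of unfitness
        return sum(r["succeeded"] for r in rows) / len(rows)

    print(predicted_success(past_hires, "A"))  # ~0.67: "hire more like these"
    print(predicted_success(past_hires, "B"))  # 0.0: "predicted to fail"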

Theory-free isn't good enough. To understand what's happening in a
complex situation, you have to be an anthropologist, not just a
statistician. You need what Clifford Geertz called "thick description" -
qualitative accounts of quantitative phenomena.

Quantitative researchers are infamous for screwing this up. The
qualitative elements are hard to do math on, so they incinerate them,
do math on the quantitative residue left behind, and assume that
residue will be sufficient. It's (usually) not.
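
A tiny, made-up example of the incineration: two support tickets with
the same numeric rating and opposite meanings. Averaging the residue
looks rigorous and tells you nothing.

    tickets = [
        {"rating": 3, "note": "agent was kind, but the product ate my data"},
        {"rating": 3, "note": "product is fine, I just clicked the wrong star"},
    ]

    # The thick description lives in "note"; the pipeline keeps "rating".
    residue = [t["rating"] for t in tickets]
    print(sum(residue) / len(residue))  # 3.0 - math works, meaning is gone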

That's why exposure notification isn't contact tracing: knowing that two
Bluetooth radios were close to each other for 15 minutes doesn't tell
you if they were swapping spit or stuck in adjacent, sealed automobiles
in a traffic jam.
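
The rule below mimics the shape of that exposure-notification logic
(close signal, sustained for 15-plus minutes, flag it); the thresholds
and scenarios are invented. Two wildly different situations are
identical as far as the radios can tell.

    def flags_exposure(rssi_dbm, minutes):
        # "close" signal (stronger than -65 dBm) held for 15+ minutes
        return rssi_dbm > -65 and minutes >= 15

    kissing_couple   = (-50, 20)  # swapping spit
    traffic_jam_cars = (-50, 20)  # sealed cars in adjacent lanes

    print(flags_exposure(*kissing_couple))    # True
    print(flags_exposure(*traffic_jam_cars))  # True - same data, different risk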

Using theory-free inference to understand the world doesn't and can't
lead to comprehension. "Theory-free" is the opposite of comprehension.
We may not have a universal, agreed-upon definition of "artificial
intelligence" but "understanding" is definitely a part of it.

Machine-learning classifiers have done amazing things to automate away a
ton of drudgery, just as smiths did amazing things to shape metal. But
smiths couldn't make reliable internal combustion engines. Incremental
improvements in metal-beating don't evolve into machining.

Reliably turning out the precision components that engines needed took
casting and machining. Getting there required a shift in approach, not
an improvement in the existing one.

Theory-free statistical inference does a lot of good stuff - and
produces a lot of bad outcomes - but the idea that if we do enough of it
we'll get artificial intelligence is fundamentally wrong.
