
Jul 12, 2014
Don't read Business Insider for AI insights
Dylan Love recently wrote an article for Business Insider titled “Why We Can't Yet Build True Artificial Intelligence, Explained In One Sentence”.

The sentence, by Jaron Lanier, explains nothing of the sort, but I don’t think that’s Lanier’s fault. Business Insider took it from this two-sentence quote, published in an opinion column in the New York Times:

We’re still pretending that we’re inventing a brain when all we’ve come up with is a giant mash-up of real brains. We don’t yet understand how brains work, so we can’t build one.

I’m not 100% sure what Jaron meant in context, but there are plenty of reasonable interpretations. On the other hand, I think it’s pretty clear that, taken out of context, “we don’t yet understand how brains work, so we can’t build one” doesn’t argue against AI. The rest of the Business Insider article makes it obvious that, in its author’s view, “true AI” is about cloning brains; we don’t understand brains, so we can’t clone them; therefore AI can’t achieve its goal. All three of these claims are highly dubious.

AI isn’t about cloning brains. Well, most AI efforts aren’t. The recent trend in AI, deep learning, is nothing at all like a biological brain in terms of actual mechanism, and earlier approaches, such as expert systems and symbolic manipulators, were even less brain-like. When human brains are “cloned” (simulated in software to whatever degree of biological accuracy is required), the motivation is usually research into brain pathologies. The EU Human Brain Project, for example, has this goal.
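
To give a sense of how far the mechanism is from biology: the “neuron” of deep learning is just a weighted sum passed through a nonlinearity. Here’s a minimal sketch in Python (the inputs and weights are arbitrary illustrative values, not from any real network):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A deep-learning 'neuron': a weighted sum squashed through a
    nonlinearity. No spikes, no ion channels, no neurotransmitters."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # logistic sigmoid

# Three inputs, with weights chosen purely for illustration.
print(artificial_neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.7], 0.2))
```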

But even if inventing a brain were AI’s main goal, one doesn’t have to understand how something works in order to build it; one only has to understand how to build it. For example, we understand enough about biological neurons to be able to simulate them pretty well. One thing that we could do, then, is put together an absolutely massive and highly accurate simulation of an arrangement of biological neurons that resembles a human brain. The challenges here lie in getting a sufficiently accurate map of a human brain to copy, and in getting enough computing power to feasibly perform the simulation. Nobody has done this yet. If someone were to do it, they might have a claim to have produced AI, but they couldn’t reasonably claim to understand how the brain works.
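
As an illustration of what “simulate them pretty well” means at the single-neuron level, here’s a sketch of the classic leaky integrate-and-fire model, one of the simplest standard neuron models. The parameter values are conventional textbook choices (millivolts, milliseconds, and an input current already scaled by membrane resistance), not taken from any particular study:

```python
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire: the membrane voltage leaks toward a
    resting value while integrating input current; when it crosses a
    threshold, the neuron "spikes" and the voltage resets."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(current):
        v += (-(v - v_rest) + i_in) / tau * dt
        if v >= v_thresh:
            spike_times.append(step * dt)  # spike time in ms
            v = v_reset
    return spike_times

# A constant input strong enough to make the neuron fire repeatedly.
print(simulate_lif([20.0] * 1000))  # spike times over 100 ms
```

Brain-simulation projects use far more detailed models (Hodgkin-Huxley-style conductance models), but the principle is the same: the equations can be followed without any theory of how intelligence arises from them.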

So what did Jaron Lanier mean? Perhaps something more like this: for strong AI, also known as “artificial general intelligence”, we need to understand enough about how intelligence works to produce something which behaves intelligently. Plenty of people have theories about this, but nobody has yet produced anything which would demonstrate that their theory is correct. It’s therefore safe to conclude that these theories are at best incomplete.

Not quite as punchy a quote, but true.