Nicholas FitzRoy-Dale's personal journal. I also write a programming blog and a tumble log. Contact me at wzdd.blog@lardcave.net or subscribe to my RSS feed.

Apr 5, 2021
"If the human brain were so simple that we could understand it, we would be so simple that we couldn’t" doesn't make sense to me

This quote is making the rounds again. As an off-hand witticism, which was presumably the intention, it's great. But I don't think it stands up to being taken seriously.

To dig into why not, let's start by taking "human brain" literally and assume that everyone agrees that there is some kind of brain which we can understand (leaving aside, for the moment, what it means to understand something). The brain of the fruit fly Drosophila melanogaster might be a good candidate for something we could eventually understand, as it has a relatively small number of neurons -- approximately 100 000. It's also convincingly a brain, exhibiting a large variety of behaviours and having parts which correspond to structures in the brains of larger animals -- unlike, for example, the "brain" of the nematode worm Caenorhabditis elegans, which seems more like a simple junction between sensory and motor neurons. In other words, imagine a scale with D. melanogaster at one end and H. sapiens at the other, and somewhere between the two a line which marks the point at which we stop being able to understand the brain.

So let's find that line. Take this brain that we understand and add a small number of additional neurons to it -- say three hundred or so, which is roughly the number of neurons in the entire body of C. elegans. At this point we have to decide whether we still understand this new, augmented brain. If we decide that we do, we add some more. Keep doing this and, if we take the title quote seriously, at some point we "fall off the cliff", reaching a point of literally incomprehensible complexity and taking the brain from something we do understand to something we don't.

For this "complexity cliff" idea to be plausible, we would have to hold the belief that adding a small number of neurons can (at least sometimes) dramatically increase the complexity of the brain. This seems to fly in the face of what we know about biological systems in general, and the brain specifically.

Brains aren't exactly modular, but there is a clear flow of information through them. The visual cortex, for example, is relatively well understood: impulses originating in the retina pass through a number of processing stages, starting with fairly basic operations like recognising lines of a particular orientation and culminating in more complex ones like recognising faces (or even specific faces) regardless of orientation, lighting, and so on. (There is also a fascinatingly large amount of information flowing backward from higher-level regions to lower-level ones, but that's a fun digression for some other post.) It seems unlikely that if we took our C. elegans-worth of neurons and put them in V1, we would suddenly hit incomprehensible complexity.
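The stage-by-stage idea can be sketched in code. The following toy Python pipeline is purely illustrative -- the stages, data format, and thresholds are all invented here, and it is in no way a model of real visual cortex -- but it shows the relevant property: each stage is simple enough to understand on its own, and understanding the whole is just understanding the stages plus the wiring between them.

```python
# Toy "visual pipeline": brightness values in, crude judgement out.
# Every name and threshold here is made up for illustration.

def detect_edges(pixels):
    # Stage 1: find positions where brightness changes sharply.
    return [i for i in range(len(pixels) - 1)
            if abs(pixels[i + 1] - pixels[i]) > 0.5]

def group_lines(edges):
    # Stage 2: group adjacent edges into "lines" (runs of consecutive indices).
    lines, run = [], []
    for e in edges:
        if run and e != run[-1] + 1:
            lines.append(run)
            run = []
        run.append(e)
    if run:
        lines.append(run)
    return lines

def recognise(lines):
    # Stage 3: an arbitrary high-level judgement based on the lines found.
    return "face-like" if len(lines) >= 2 else "not a face"

def visual_pipeline(pixels):
    # The whole is just the stages composed in order.
    return recognise(group_lines(detect_edges(pixels)))

print(visual_pipeline([0, 1, 0, 0, 1, 0]))  # → face-like
```

Dropping extra machinery into one stage (a fancier `detect_edges`, say) leaves the other stages, and the overall flow, exactly as comprehensible as before -- which is the point being made about adding neurons to V1.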

All right, but the visual system is nicely self-contained, and even tiny-brained insects have one. What about consciousness? The mind's eye? Art? And so on. This seems to me to be the crux of the matter, and a little chauvinistic: it amounts to saying that the things we prize the most, which are also the things we currently understand the least, are precisely the ones that will never fall to our own understanding. It's the same sort of argument that was made about computer chess in the 70s and 80s -- that beautiful play on the chess board required empathy, or an appreciation of what it is to be human. Nope, it just requires a bunch of heuristics and computational power. (People were saying the same about Go right up until AlphaGo's historic match against Lee Sedol, though I think one could make a reasonable argument that we don't fully understand AlphaGo. But that's beside the point: nobody is claiming that AlphaGo is empathetic, or that it has a keen understanding of the human condition.)

Humans don't learn or understand by taking everything in at the lowest level of detail -- that would be like trying to explain how a tap works by discussing the movement of every water molecule in the pipe. We learn by abstracting, building a hierarchy. There's a lot of evidence that the brain is arranged in a way that's amenable to this hierarchical approach. A good example is the cerebral cortex, which seems to consist of the same fundamental neural circuit repeated over and over. Indeed, the difference between the cerebral cortex of a human and that of, say, a cat isn't in the types and connectivity of the neurons -- they're very similar -- but simply in the number of these circuits that are present.

It's comforting to imagine that there is something ineffable that makes us human -- something which will never fully be understood or explained. But to me it's far more impressive and awe-inspiring if we do turn out to be completely explicable -- if that, for no particularly good reason, fifteen to twenty billion neurons came together in a well-defined order and wrote a blog post about themselves.