The Brain & AI

Is Artificial Intelligence today where brain research was 100 years ago?

Babies are not born with randomly connected brains and turned on to learn.  And yet, 100 years ago, neurobiologists were not so sure.  In fact, most of them rather liked the idea, because they disliked the alternative: the development of intelligent brains without learning – as if embryo development could determine who you are.  100 years ago, neurobiologists had only recently discovered the existence of vast nerve fiber networks in the brain.  But where could the information come from that rendered the networks ‘intelligent’?  There could be but one answer: learning.  It seemed much easier to envision the development of a randomly connected network that becomes smart through learning than a well-connected network that had grown smart during development.  The underlying debate about the genetic basis of intelligence has lost none of its vigor to this day. 

Curiously, today’s AI researchers are in agreement with those early pioneers of neurobiology: even the most advanced deep neural networks are based on the principle of an initially randomly connected network that is turned on to learn.  Meanwhile, 100 years of research on the development and genetic encoding of biological neural networks have left a mark.  Enormous (and enormously expensive) research efforts are underway to map ‘connectomes’ in brains to provide maps of genetically encoded connectivity.  If connectomes were simply random, these efforts would be done in a day.  But they are not random.  Neural circuits in biological brains are a fundamental basis for understanding brain function, including the ability to learn.
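The principle can be made concrete with a minimal sketch in plain Python (no framework; the classic perceptron learning rule stands in here for modern training methods, and all names are illustrative): connection weights start as pure chance, and learning alone makes the network useful.

```python
import random

random.seed(1)

# Before learning: connectivity is pure chance, not development.
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

# A toy task: learn the logical OR function from examples.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def predict(x):
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

# Perceptron learning rule: nudge weights toward correct outputs.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

print([predict(x) for x, _ in data])  # → [0, 1, 1, 1]
```

With this seed, the randomly initialized neuron starts out wrong on several inputs; after training it classifies all four correctly.  Nothing in the initial wiring anticipated the task, which is exactly the assumption the early neurobiologists found so attractive.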

The effort to map connectomes is reminiscent of similar efforts 20 years ago to map genomes.  Back then, some people asked: Well, once we know the entire genome, aren’t we done?  But as it became clear very quickly, we were at a beginning then, not the end.  The genome does not contain information that describes neural networks; it only contains information to grow neural networks.  Scientists are grappling with this difference to this day.

Why is growth so important?  Genes allow brains to grow, but we cannot read the connectome in the genome.  In fact, there is much less information to read in the genome than there is in a connectome.  It is easy to fully describe a genome compared to the attempt to fully describe a brain.  Where does the missing information come from?  Growth is an energy- and time-dependent process.  More energy and time during brain development allow for more information in brain wiring.  The Monarch butterfly has a tiny brain, but it allows the butterfly to navigate in space and time by computing light, gravity, wind, landscapes and the electromagnetic field of the earth.  Somehow, this enables the tiny brain to compute a journey of thousands of miles to a very small region in some faraway mountains that the butterfly never knew, because its last ancestor to fly this route was its great-great-grandparent.  One could say: the route is in the genes.  But we can’t read it there.  The genes can only guide the self-assembly of the butterfly’s brain.  By the end of its development this brain knows how to fly and find those mountains, before learning anything. And the tiny brain achieves much more, of course: to recognize danger and adjust behavior accordingly, to find food and mate, … and think like, well, a butterfly.

The history of AI is a history of trying to avoid biological detail while trying to create something that so far exists only in biology. For decades this history was characterized by trying to avoid neural networks.  Today, neural networks have become synonymous with AI.  But even the most advanced AI systems are still based on neural networks that are designed, not grown, with random connection weights prior to learning.  The Monarch butterfly should be surprised that we consider a face-recognition AI really smart, but a little butterfly apparently rather stupid.

How brains self-assemble based on genes and learning is one of the most exciting riddles in natural sciences.  After all, what comes out of it can think about the riddle of itself and try to build an artificial version of itself: our brain.  The question AI researchers have been facing for more than 70 years is this: what simplifying shortcuts can we take?  For example, artificial neural networks get away with the shortcut of simulating synaptic connection strengths without simulating the millions of molecules that create synaptic properties in biology.  It works, but it has consequences.  The complete omission of genes and growth should at least leave us wondering: what kind of intelligence would they have been needed for?  Certainly not for today’s AI applications.  But how about the intelligence of a butterfly, or that of a teenager?  Every shortcut has consequences for the intelligence we get.  And for some things in life, there is just no shortcut.
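To see how drastic that shortcut is, here is a sketch in plain Python (all names are illustrative, not from any particular framework): in an artificial network, the entire molecular machinery of a synapse collapses into a single number.

```python
def artificial_synapse(presynaptic_activity: float, weight: float) -> float:
    # Everything a biological synapse is made of (receptors, vesicles,
    # plasticity-related molecules, ...) is summarized by one float: `weight`.
    return weight * presynaptic_activity

def artificial_neuron(inputs, weights, bias=0.0):
    # A neuron is then just the sum of its synapses, passed through a
    # nonlinearity (here, a simple threshold).
    total = bias + sum(artificial_synapse(x, w) for x, w in zip(inputs, weights))
    return 1.0 if total > 0 else 0.0

print(artificial_neuron([1.0, 0.5], [0.8, -0.4]))  # → 1.0
```

One scalar per connection is all that survives of the biology; whatever intelligence depends on the omitted machinery, and on genes and growth, is simply not in the model.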