Mortal Computing, Geoffrey Hinton’s Forward-Forward Algorithm and The Self-Assembling Brain

Geoffrey Hinton is both a founding father and a visionary critic of current approaches to AI. He has repeatedly reflected on the lessons the field may learn from biology, and he has clearly spelled out fundamental limits of today’s most successful approach: software-based learning methods that require no prior information in the hardware. In his new preprint ‘The Forward-Forward Algorithm: Some Preliminary Investigations’ he included a final section on ‘mortal computation’ whose closing words pinpoint these limitations by highlighting a prominent difference between computers running learning software and biological brains: energy consumption.

“If you want your trillion parameter neural net to only consume a few watts, mortal computation may be the only option. Its feasibility rests on finding a learning procedure that can run efficiently in hardware whose precise details are unknown, and the Forward-Forward algorithm is a promising candidate, though it remains to be seen how well it scales to large neural networks.”
Geoffrey Hinton, The Forward-Forward Algorithm: Some Preliminary Investigations
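
To make the idea concrete, here is a minimal NumPy sketch of the core mechanism described in Hinton’s preprint: each layer is trained with a purely local objective, pushing its ‘goodness’ (the sum of squared activities) above a threshold for positive data and below it for negative data, with no errors backpropagated between layers. The layer sizes, threshold, learning rate and toy positive/negative data below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One layer trained with a local, Forward-Forward-style objective."""

    def __init__(self, n_in, n_out, threshold=2.0, lr=0.03):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        self.b = np.zeros(n_out)
        self.threshold = threshold
        self.lr = lr

    @staticmethod
    def _normalize(x):
        # Length-normalize inputs so a layer cannot "cheat" by simply
        # inheriting large activities (high goodness) from the layer below.
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    def forward(self, x):
        return np.maximum(0.0, self._normalize(x) @ self.W + self.b)

    def train_step(self, x_pos, x_neg):
        x = np.vstack([x_pos, x_neg])
        y = np.concatenate([np.ones(len(x_pos)), np.zeros(len(x_neg))])
        xn = self._normalize(x)
        h = np.maximum(0.0, xn @ self.W + self.b)
        goodness = np.sum(h ** 2, axis=1)                  # per-example goodness
        p = 1.0 / (1.0 + np.exp(-(goodness - self.threshold)))
        # Gradient of the local logistic loss w.r.t. THIS layer's parameters;
        # nothing is backpropagated to earlier layers.
        d_pre = (p - y)[:, None] * 2.0 * h / len(x)
        self.W -= self.lr * xn.T @ d_pre
        self.b -= self.lr * d_pre.sum(axis=0)
        return self.forward(x_pos), self.forward(x_neg)

# Greedy, layer-local training: each layer only ever sees the activities of
# the layer below, plus the positive/negative distinction.
layers = [FFLayer(20, 64), FFLayer(64, 64)]
x_pos = rng.normal(size=(128, 20))           # stand-in for real ("positive") data
x_neg = 3.0 * rng.normal(size=(128, 20))     # stand-in for corrupted ("negative") data
for _ in range(100):
    h_pos, h_neg = x_pos, x_neg
    for layer in layers:
        h_pos, h_neg = layer.train_step(h_pos, h_neg)
```

Because each layer’s update depends only on its own inputs and activities, the procedure does not need a precise global model of the hardware it runs on, which is exactly the property Hinton ties to mortal computation.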

Yet the real concern here may be about information, which has an important, albeit non-intuitive, equivalency to energy (think information entropy, Shannon’s ‘missing information’). Energy is what you need to get information into the network. The key difference between biological and artificial neural networks is this: in artificial neural networks, very little energy is required for the initial (random) wiring, and all the energy AI developers talk about is the energy required for learning (training); by contrast, biological neural networks contain A LOT of information already prior to any learning, and this high-information state requires a lot of energy, both during the development of an individual network and during the evolutionary programming of the developmental process over aeons. In this respect, AI is where neuroscience was 100 years ago, when some scientists thought everything in your brain had to be learned. That’s the story of The Self-Assembling Brain.
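
One way to see how little information the initial wiring of an artificial network carries: the entire pre-training state of a network with millions of parameters can be regenerated from a few bytes, namely a random seed and the layer shapes. The sketch below only illustrates that point; the sizes are arbitrary.

```python
import numpy as np

# The complete "pre-learning wiring" of an artificial network can be
# regenerated from a handful of bytes: a PRNG seed plus the layer shapes.
seed = 42
layer_shapes = [(784, 1024), (1024, 1024), (1024, 10)]   # arbitrary example sizes

def initial_weights(seed, layer_shapes):
    rng = np.random.default_rng(seed)
    return [rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_in, fan_out))
            for fan_in, fan_out in layer_shapes]

w_a = initial_weights(seed, layer_shapes)
w_b = initial_weights(seed, layer_shapes)
assert all(np.array_equal(a, b) for a, b in zip(w_a, w_b))   # bit-identical

n_params = sum(w.size for w in w_a)
print(f"{n_params:,} initial parameters fully specified by one seed and three shapes")
```

A trained network has no such tiny description: whatever information the data (and the energy spent on training) put into the weights must afterwards be stored in the weights themselves.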

An excellent article on the Forward-Forward algorithm, mortal computing and The Self-Assembling Brain was published on Dec. 19 by Ben Dickson in BD TechTalks. Here is an excerpt:

Peter Robin Hiesinger, professor of neurobiology and author of The Self-Assembling Brain, points out that the key concern is information. In biology, there is a lot of information contained in biological neural networks prior to learning, he said.

“It is in the connectivity. It is in the molecular composition of each individual synaptic connection, which is so much more than just a ‘synaptic weight’. And it all got there—in biology anyways—through genetically encoded development prior to learning,” Hiesinger said.

Genetically encoded development solves two problems raised by Hinton, according to Hiesinger. First, it determines how to get the “trillion parameters” into the network. And second, the evolutionary programming of the genome underlying development is the “learning procedure” that efficiently “runs” in hardware whose precise details are unknown. 

“In fact, the phrasing of a ‘procedure that can run efficiently in hardware whose precise details are unknown’ falls back to exactly what Hinton questions: a separation of the procedure (software) from the hardware,” Hiesinger said. “In biology, the precise details of the hardware are the outcome of a long information-encoding process, evolutionarily programmed and unfolded through a growth process.”

The efficiency of biological learning lies in this gradual information-encoding process: there is no point where development of the hardware ends and the running of learning procedures begins. Instead, information encoding in the network, including at the subcellular and molecular level, occurs continuously, first without and later with neuronal activity.
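
As an illustration of what ‘information encoding prior to learning’ can mean, the toy sketch below unfolds a compact ‘genome’ through a fixed growth rule into a connectivity matrix far larger than the genome itself. It is not a model of any real developmental program, and the rules and numbers are invented for the example; it only shows how structured wiring can exist before any data-driven learning begins.

```python
import numpy as np

# Toy "developmental" encoding: a compact genome (a handful of numbers) is
# unfolded by a fixed growth rule into a full connectivity matrix that is far
# larger than the genome itself. Purely illustrative, not a biological model.
genome = {"n_neurons": 200, "n_types": 4, "affinity_seed": 7, "wire_radius": 0.15}

def grow_network(genome):
    rng = np.random.default_rng(genome["affinity_seed"])
    n, k = genome["n_neurons"], genome["n_types"]
    positions = rng.random((n, 2))                 # neurons "migrate" to positions
    types = rng.integers(0, k, size=n)             # cell types specified by the genome
    affinity = rng.random((k, k)) > 0.5            # which types are allowed to connect
    dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    # A synapse forms when two neurons are close enough AND their types match:
    # detailed wiring emerges from rules, not from a stored wiring diagram.
    connected = (dist < genome["wire_radius"]) & affinity[types[:, None], types[None, :]]
    np.fill_diagonal(connected, False)
    return connected

adjacency = grow_network(genome)
print(f"genome entries: {len(genome)}, synapses grown: {adjacency.sum():,}")
# Any activity-dependent learning would start from THIS structured state,
# not from an information-free random initialization.
```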

“Learning at the neuronal and synaptic level already occurs as a purely genetically encoded process, prior to the processing of environmental information,” Hiesinger said. “There is no ‘on’ switch, only gradually increasing information encoding that results in a network ‘whose precise details are unknown’ only from the perspective of learning that ignores the information encoded prior to feeding data into the network.”  

The precise details of a given state of the biological hardware are so difficult to know because it would take so much information to describe it all.  And there is no agreement on what is relevant.

“In the current AI, nothing matters but the synaptic weight. But many aspects of synaptic function cannot be simulated without the properties, dynamics and function in time of the myriads of individual molecules that make the synapse react in a way that the up- or downregulation of a synaptic weight through gradient descent does not,” Hiesinger said. “Hinton is in search of that missing information.”
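
For a sense of what a scalar synaptic weight leaves out, the sketch below contrasts a static weight with the classic Tsodyks-Markram model of short-term plasticity, in which the effect of each spike depends on time-varying utilization and resource variables. Even this is a drastic simplification of the molecular machinery Hiesinger refers to, and the parameter values are illustrative only.

```python
import numpy as np

def static_synapse(spike_times, w=1.0):
    # A single scalar weight: every spike has exactly the same effect.
    return [w for _ in spike_times]

def tsodyks_markram(spike_times, A=1.0, U=0.2, tau_rec=0.5, tau_facil=0.3):
    # Short-term plasticity: efficacy depends on time-varying utilization (u)
    # and recovered resources (r), so identical spikes have different effects.
    u, r, last_t = 0.0, 1.0, None
    amplitudes = []
    for t in spike_times:
        dt = 0.0 if last_t is None else t - last_t
        u = u * np.exp(-dt / tau_facil)              # facilitation decays between spikes
        r = 1.0 - (1.0 - r) * np.exp(-dt / tau_rec)  # resources recover between spikes
        u = u + U * (1.0 - u)                        # this spike boosts utilization
        amplitudes.append(A * u * r)                 # effective efficacy right now
        r = r - u * r                                # resources are consumed by release
        last_t = t
    return amplitudes

spikes = [0.0, 0.02, 0.04, 0.06, 0.5]                # a burst, then a late spike (seconds)
print(static_synapse(spikes))                        # [1.0, 1.0, 1.0, 1.0, 1.0]
print([round(a, 3) for a in tsodyks_markram(spikes)])  # history-dependent efficacy
```

A single number per connection cannot reproduce even this simple history dependence, let alone the molecular detail of a real synapse.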