
Neuroscience professor Anil Seth argues that conscious human intelligence is tightly coupled to our living, biological substrate, and is neither replicable nor simulatable in silicon. Here are some of my reactions to his thought-provoking piece:
If human intelligence and consciousness are substrate-dependent, as asserted, even down to individual neurons being irreplaceable by silicon substrates, then some precise and strong claims follow: uploading human consciousness to a new substrate (as referenced in the article) would not be possible, and brain-computer interface (BCI) companies should not be able to augment the core of human intelligence. This would have profound implications for the possibility of “humanity” going along for the ride of exponential progress in AI.
(As an aside, it’s far more likely that our biology is left behind: building an AI that exceeds human intelligence will likely happen before we fully understand the brains we have. It’s easier to build a new one than to reverse-engineer the complex product of an iterative algorithm such as evolution, cortical pruning, or neural-net development. The locus of learning shifts to the process of development, not its product.)
Let me lend further evidence to the article’s claim that neural complexity vastly exceeds the neural-net abstractions of current AI, and that human intelligence may be substrate-dependent. At the high level of the connectome, the average adult has 1,000 input synapses per neuron, and a newborn baby has 10,000. Silicon chips do not have enough metal layers to implement this level of fan-in per gate. And these connections are dynamic: roughly 90% are pruned during childhood development, and neurons that fire together wire together in an ongoing remapping over time. Pure, detailed biomimicry of the brain in mainstream CMOS silicon may be impossible, for now and the foreseeable future. Dynamic interconnect is the issue, and it may require a fully 3D, fluid, low-power substrate. Like the brain. And it might take some of the special chemical properties of carbon to capture the richness (I wondered about this in 2005).
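The fan-in and pruning numbers above can be sketched with a toy model: a single neuron with newborn-scale fan-in, a Hebbian “fire together, wire together” update, and a pruning step that discards the weakest 90% of synapses. This is purely my illustration, not a brain model; the learning rule, learning rate, and activity pattern are assumptions chosen to make the dynamic-rewiring point concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy numbers from the text: ~10,000 input synapses per neuron at birth,
# ~1,000 remaining after childhood pruning (roughly 90% pruned).
FAN_IN_NEWBORN = 10_000
PRUNE_FRACTION = 0.9

# One linear neuron with dense newborn fan-in; weights start small and random.
weights = rng.normal(0.0, 0.01, FAN_IN_NEWBORN)

def hebbian_step(weights, inputs, lr=0.1):
    """Hebbian update: synapses active when the neuron is active grow.

    Normalizing keeps the weight vector bounded (in the spirit of Oja's
    rule), so synapses carrying correlated activity win out over silent ones.
    """
    output = weights @ inputs                 # linear activation
    weights = weights + lr * output * inputs  # fire together, wire together
    return weights / np.linalg.norm(weights)

# Drive the neuron with correlated activity on the first 1,000 inputs only;
# the other 9,000 synapses stay silent throughout "development".
for _ in range(50):
    inputs = np.zeros(FAN_IN_NEWBORN)
    inputs[:1000] = rng.random(1000)
    weights = hebbian_step(weights, inputs)

# Developmental pruning: discard the weakest 90% of synapses by magnitude.
threshold = np.quantile(np.abs(weights), PRUNE_FRACTION)
surviving = np.abs(weights) > threshold
print(int(surviving.sum()))  # roughly 1,000 synapses remain, all on the active inputs
```

The point of the sketch is the interplay the paragraph describes: which synapses survive depends on the activity history, so the interconnect is a moving target rather than a fixed wiring diagram.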
On the other end of the spectrum, the complexity of the neuron vastly exceeds a simple sigmoid voting circuit or digital gate abstraction. Ion channels activate like a bucket brigade down each axon. HIV-like particles and endogenous cannabinoids may play a role in nearest-neighbor interactions outside the synapse. The extracellular matrix, like the potting soil around the neuron, relaxes during a long series of critical periods in childhood development, and under the influence of psychedelics, opening windows of heightened neuroplasticity in which the interconnect can change. And neuron types may be vastly more varied than the observable phenotypic buckets (pyramidal, mirror neurons, etc.): MIT’s Ed Boyden believes that the gene expression of each neuron is unique, implying literally billions of different neuron types.
But, even if human intelligence and consciousness are fully substrate dependent, it does not follow that human-level intelligence is impossible with a different substrate. We may have only one existence proof from biological evolution, but that does not imply exclusivity in the space of possibilities. The substrate of our brains is not very different from less intelligent animals; our unique advancement came from layering on more self-similar cortex — not a better substrate but more of it.
Much of our substrate is unique to its evolutionary origins and to making the most of them; it’s quite a miracle that meat can think at all, let alone do math and compute, even if we choose not to. We can imagine that a certain percentage of our substrate handles basic metabolic support and garbage collection and is not fundamentally essential to the thinking at hand, when abstracted at the right level. It’s like the power-supply implementation of a computer not being essential to the computational architecture itself. Some portion of the genetic code in each neuron is a vestigial passenger from viral transposons of the past.
It’s safe to say that some fraction of our substrate is critical to the architecture of intelligence, and the essential exercise of biomimicry is to figure out the right level of abstraction, the right level of detail, if we wish to follow a similar path in a different substrate.
The critique of current AI approaches as resting on an oversimplified abstraction may be correct, but the shortfall is not insurmountable. Or the shortcomings could be a vestige of the architecture and training process of today’s LLMs. A number of the AI advances of the past decade focused on reinforcement learning (RL); it was DeepMind’s initial focus. There has been a revival of late, with some, like Yann LeCun, arguing that LLMs will never get us there… but RL will. We have believed for many years that the future of AI compute will be analog in-memory compute, as implemented in Mythic chips, and in the brain. Some believe it will require an embodied intelligence interacting with the world of physical AI. Jeff Hawkins is working on a memory-prediction architecture, arguing that the brain is not a computer at all (and perhaps the qualia of consciousness are merely the retrospective sensemaking of predictions occurring continuously at all layers of the cortex). Perhaps we will need a coincidence detector for asynchronous circuits to mimic the fire-together/wire-together paradigm (perhaps with reversible-computing resonators). Perhaps a neurosymbolic hybrid will bear fruit in mimicking different brain regions distinctly. Perhaps we will need a series of critical periods, as human children have, with a path dependence on the sequencing of neural-net training. There are many possibilities and much exciting work to come, a Cambrian explosion of sorts, exploring different abstractions of architecture and processes of training.
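To make the coincidence-detector idea concrete, here is a minimal sketch of a spike-timing rule: a synapse strengthens when its input spike lands shortly before the output spike, and weakens when it lands shortly after. This is a simplified form of spike-timing-dependent plasticity (STDP); the function name, window, and learning rates are my illustrative assumptions, not anyone’s proposed circuit.

```python
def stdp_update(weight, pre_times, post_times, window=20.0,
                lr_plus=0.05, lr_minus=0.025):
    """Pair every pre/post spike (times in ms) and update one synapse."""
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if 0 < dt <= window:         # pre fires just before post: potentiate
                weight += lr_plus * (1 - dt / window)
            elif -window <= dt < 0:      # pre fires just after post: depress
                weight -= lr_minus * (1 + dt / window)
    return weight

# An input that reliably fires 5 ms before the output spike gets stronger...
w_causal = stdp_update(0.5, pre_times=[10, 60], post_times=[15, 65])
# ...while one that reliably fires 5 ms after it gets weaker.
w_acausal = stdp_update(0.5, pre_times=[20, 70], post_times=[15, 65])
print(round(w_causal, 4), round(w_acausal, 4))  # 0.575 0.4625
```

The detector is sensitive to the order and timing of events, not to clocked activations, which is the asynchronous flavor the paragraph gestures at.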
While we humans want to feel special, unique, and central to the future, wanting it does not make it so. One day we will have a more advanced non-human intelligence that is conscious. That conclusion follows simply from considering the next million years of continued biological evolution, with a selection function that rewards intelligence. To argue otherwise is to argue that Homo sapiens are somehow the endpoint of evolution. Evolution does not suddenly end, even if we wish it to. The biological substrate of our successor species will likely be similar to ours, as the primary vector of evolutionary progress operates most rapidly at the highest level of abstraction. The open question is whether non-biological evolutionary algorithms will usher in non-biological intelligence that is superhuman and conscious: perhaps in a handful of years, if we are pursuing the right level of abstraction for conscious intelligence, or perhaps in decades, if we need to explore radically different analogs to our analog meat minds.
— Anil Seth is the director of the Centre for Consciousness Science at the University of Sussex. Here is his article in Noema.