I was a big reader as a kid. I read all the popular series of the day—Ramona, The Baby-Sitters Club. Once I’d read all of those, I’d get older series from the library. My favorite was The Great Brain, about a Catholic family with four sons living in a Mormon town in Utah in the 1890s. One of the sons, Tom, was the smartest kid in town, dubbed “The Great Brain” by the younger brother who narrates the series.
The Great Brain was a terrific swindler, duping everyone into making bad deals or doing his bidding to advance some nefarious agenda. Then there’d be some problem facing the town that only The Great Brain could solve, such as figuring out what was poisoning the well. In those moments he’d rise to the occasion and put his super intelligence to work for the greater good. The Great Brain might have been devious and self-interested, but he was a hero you could root for in the end.
I got to thinking about The Great Brain the other day after reading about the release of DeepSouth, a neuromorphic supercomputer and AI platform that went online in Sydney, Australia last month. Specifically, I was wishing they’d called it The Great Brain.
For the uninitiated, neuromorphic computing is a branch of computer engineering in which a computer’s hardware is modeled after the human brain and nervous system. DeepSouth is not the first neuromorphic supercomputer, but it is a notable entry in a very small field, competing with Intel’s Hala Point and Loihi 2 systems.
DeepSouth’s arrival on the scene has been highly anticipated for the past year as the first neuromorphic computer to match the astonishing processing rate of the human brain, capable of performing 228 trillion operations per second. That is projected to be incrementally faster than Intel’s machines, but more than 50 times faster than conventional supercomputers built on traditional architectures.
Neuromorphic computing has been around since the 1980s, with progress dependent on advances in neuroscience (even today we don’t know everything there is to know about the brain) and on innovations in materials technology, such as silicon, that allow us to manifest designs mimicking the brain’s architecture and processes.
Now, I could go on; it’s all very exciting. But before I do, I have a confession to make.
When I first read about DeepSouth, before learning everything I just shared with you about neuromorphic computing, I’d been operating under the assumption that the computers I’d been using all my life had been modeled on the brain, at least in part. If not, what was all this talk about artificial neural networks going back as far as the 1960s? So I asked Google, who knows a lot, and this is what I learned.
When we talk about artificial neural networks, we are talking about software. While software has incorporated brain-inspired processes for a long time, the hardware on which computer technology has evolved over the last 80 years has an architecture that is distinctly different from the human brain’s. That architecture was described by the mathematician and computer scientist John von Neumann in 1945.
The von Neumann architecture is characterized by separate units for processing (arithmetic, logic, and control), memory, and input/output. These components communicate over a system of buses, akin to a city’s road network, allowing data to move smoothly from one part to another. It is ideal for running stored programs whose instructions are processed in a set sequence. It is a structure of great simplicity, and yet we have built the modern world on its back.
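To see the idea in miniature, here is a toy sketch (my own illustration, with a made-up instruction set rather than any real chip’s) of a stored-program, von Neumann-style machine: instructions and data live in one shared memory, and a single processor fetches and executes them one at a time.

```python
# Toy stored-program machine: one memory holds both instructions and data,
# and a lone processor steps through them in sequence over a shared "bus".
memory = {
    0: ("LOAD", 100),   # copy the value at address 100 into the accumulator
    1: ("ADD", 101),    # add the value at address 101
    2: ("STORE", 102),  # write the accumulator back to address 102
    3: ("HALT", None),
    100: 2, 101: 3, 102: 0,   # data sits in the same memory as the program
}

accumulator = 0
program_counter = 0

while True:
    opcode, address = memory[program_counter]  # fetch the next instruction
    program_counter += 1
    if opcode == "LOAD":
        accumulator = memory[address]
    elif opcode == "ADD":
        accumulator += memory[address]
    elif opcode == "STORE":
        memory[address] = accumulator
    elif opcode == "HALT":
        break

print(memory[102])  # prints 5: one instruction at a time, in a set sequence
```

Every piece of work, however large, gets funneled through that single fetch-and-execute loop, which is precisely the bottleneck neuromorphic designs try to sidestep.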
By way of structural comparison, if the von Neumann architecture were a city, it would be Zurich. If the brain (and brain-inspired computers) were a city, they’d be Calcutta.
This difference is what makes neuromorphic computing increasingly relevant today. We live in a time of increasing complexity, with computational demands that are both larger and qualitatively different from those of the past.
The brain’s evolution toward complexity is how it achieves its efficiency and learning ability, qualities that are critical to the development of AI. Neuromorphic computing seeks to replicate these characteristics in the following ways (a small sketch of a few of them follows the list):
Parallel processing: neuromorphic computers, like the brain, have the capacity for parallel processing, where nodes (neurons) and synapses can potentially be operating simultaneously.
Collocated processing and memory: there is no separation of processing and memory; computation happens where the data already live, much as it does in the brain’s neurons and synapses.
Plasticity: neuromorphic computing chips allow networks of artificial neurons to learn on the fly and adapt to new information in a way that approximates the brain’s plasticity.
Event-driven computation: neuromorphic computers leverage event-driven computation (meaning, computing only when data are available) and temporally sparse activity to allow for extremely efficient computation.
Stochasticity: neuromorphic computers can include a notion of randomness, such as in the firing of neurons, to allow for noise.
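To make a few of these properties concrete, here is a minimal sketch (my own illustration, with made-up parameters, not the API of DeepSouth or any real neuromorphic chip) of a single leaky integrate-and-fire neuron: it only does work when an input spike arrives (event-driven), its state and synaptic weight live together in one place (collocated processing and memory), its weight strengthens when it helps cause an output spike (a crude stand-in for plasticity), and a little randomness is mixed into each input (stochasticity).

```python
import random

# Toy leaky integrate-and-fire neuron (illustrative only; all numbers are made up).
THRESHOLD = 1.0       # membrane potential at which the neuron fires
LEAK = 0.9            # the potential decays a little every time step
LEARNING_RATE = 0.05  # crude plasticity: strengthen the synapse when it helps cause a spike
NOISE = 0.05          # stochasticity: jitter added to each incoming spike

potential = 0.0  # state and weight live with the neuron itself,
weight = 0.4     # not in a separate memory unit

# Sparse, event-driven input: most time steps carry no spike at all.
input_spikes = [0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0]

for t, spike in enumerate(input_spikes):
    potential *= LEAK                      # passive leak happens every step
    if spike:                              # compute only when an event arrives
        potential += weight + random.gauss(0.0, NOISE)
        if potential >= THRESHOLD:
            print(f"t={t}: output spike")
            potential = 0.0                # reset after firing
            weight += LEARNING_RATE        # plasticity: this synapse helped, so strengthen it
```

A real chip runs millions of these little units side by side rather than one after another, which is where the parallelism in the first bullet comes from.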
As we’ve discussed before in the AI-Curious Newsletter, the promise that AI is going to fix everything from climate change to educational inequity has made many people skeptical. Skepticism only grows when early releases of AI seem to lay bare the tremendous gap between where the technology is now and those moonshot aspirations.
Those in the neuromorphic development community believe that neuromorphic technology will be the critical factor in bridging that gap. The assertion is that advances in machine learning and AI, while transformational, are limited by the inherent shortcomings of traditional computer architecture. A paradigm shift in modern computing, and the realization of our biggest goals, will occur only with a radical move toward broader adoption of neuromorphic technology. This is no easy task given the ubiquity of traditional computing architectures, and yet we are moving in that direction.
Funding has surged in recent years, driven by the need for hardware to support AI innovation. In the last few months alone, the Pentagon began awarding grants from a $78 million research fund created to invest in companies developing neuromorphic AI chips that can work in remote battlefields. Meanwhile, Goldman Sachs forecasts that investment in artificial intelligence will surpass $200 billion by 2025.
And here we arrive at the part of the newsletter where I offer some ideas about how we might think about these developments. Here’s what I’ve got for you:
Even as we might increasingly feel that “it’s AI’s world and we’re just living in it,” we should consider that we are also entering the Brain Age, with all the human brain is going to teach us. Decades of neuroscience research have brought us to the point where we have the technology to push through a new ceiling of knowledge about how the brain works, creating the conditions for dramatic advances that will allow us to effectively treat neurological problems such as dementia, Parkinson’s disease and blindness. And this is just the tip of the iceberg.
Even as we rightly proceed with caution in building a future shaped by AI, we can be hopeful that the catalyzing effect of neuromorphic computing will elevate common uses of AI beyond productivity and corporate interests into the realm of the truly heroic—not unlike my old friend, The Great Brain, rising beyond baser instincts when the chips are down to achieve something universally good.