The Staggering Inefficiency of AI vs the Human Brain


The human brain is the most energy-efficient computing system ever studied. It runs an entire conscious lifeform (perception, memory, language, emotion, movement) on between 12 and 20 watts of power, roughly the same draw as charging a smartphone.
Switzerland's Blue Brain Project estimated that simulating the human brain's full processing in real time would require approximately 2.7 billion watts. This is equivalent to the combined output of three nuclear power stations - enough electricity to supply a large city.
That gap matters. This article looks at what the numbers actually mean, why the brain is built so differently from a chip, what researchers are currently building in response, and where the comparison between biological and artificial intelligence starts to break down.
The Energy Comparison in Context
The human brain contains approximately 86 billion neurons and handles perception, memory, motor control, emotional regulation, social reasoning, and creative thought on between 12 and 20 watts, about the same as a bedside lamp.[1] A typical laptop processor uses around 150 watts. The fastest supercomputer currently running draws over 21 million watts.[2]
At the task level, generating a single text response from a large language model is estimated to require over 6,000 joules. The brain uses roughly 20 joules per second to sustain all of its cognitive and biological functions at once.[3] The two aren't directly comparable, as the brain isn't producing text responses, but the order of magnitude is telling.
| System | Continuous power draw | Energy per cognitive task |
|---|---|---|
| Human brain | 12–20 W | ~20 J/second (all tasks, continuously) |
| Laptop processor | ~150 W | n/a |
| Large language model (single response) | Variable (data centre) | >6,000 J per query |
| Frontier supercomputer | >21,000,000 W | n/a |
| Hypothetical real-time brain simulation (Blue Brain Project estimate) | ~2,700,000,000 W | Estimated requirement to match brain in real time |
Energy comparisons, 2025. Sources: Neurozone, TechXplore, PMC / Frontiers in Neuroscience.
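
To make the order of magnitude concrete, here is the arithmetic behind the table as a short Python sketch. The constants are the approximate figures cited above; nothing here is a new measurement.

```python
# Back-of-envelope arithmetic behind the table. All constants are the
# approximate, order-of-magnitude figures cited above.

BRAIN_POWER_W = 20        # upper end of the brain's 12-20 W range [1]
LLM_QUERY_J = 6_000       # estimated energy per LLM text response [3]
SIM_POWER_W = 2.7e9       # Blue Brain real-time simulation estimate

# 6,000 J would run the *whole brain* - perception, memory, movement,
# language - for 6,000 / 20 = 300 seconds.
print(f"One query's energy = {LLM_QUERY_J / BRAIN_POWER_W:.0f} s of whole-brain operation")

# Ratio of simulating the brain in real time to running the real thing:
print(f"Simulation vs brain power draw: {SIM_POWER_W / BRAIN_POWER_W:.1e}x")
```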
Zooming out to the infrastructure level, data centres and AI consumed around 460 terawatt-hours globally in 2022. The International Energy Agency projects that could more than double by 2026, approaching 1,000 TWh, roughly comparable to Japan's total annual electricity consumption.[4] AI queries consume considerably more electricity than conventional web searches, and when you multiply that across billions of daily interactions, it adds up fast.[5]
Key finding
A peer-reviewed estimate in Frontiers in Neuroscience puts the total relative energy efficiency of the human brain versus silicon semiconductor processors at approximately 2.7 × 10¹³. The figure accounts for per-operation efficiency and for the fact that current hardware takes around 30,000 times longer than real time to simulate biological neural activity. This makes any like-for-like comparison considerably harder than a simple watt comparison suggests.[6]
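
The decomposition implied by that figure can be checked in a couple of lines. The per-operation factor below is inferred by simple division from the two cited numbers; the paper may partition the contributions differently.

```python
# Checking how the cited figure decomposes: total advantage =
# per-operation advantage x real-time penalty. The per-operation factor
# is inferred by division here, not quoted directly from the paper.

TOTAL_EFFICIENCY_RATIO = 2.7e13   # brain vs silicon, overall [6]
REALTIME_SLOWDOWN = 30_000        # simulation runs ~30,000x slower [6]

implied_per_op = TOTAL_EFFICIENCY_RATIO / REALTIME_SLOWDOWN
print(f"Implied per-operation gap: {implied_per_op:.0e}")
# ~9e8: even setting speed aside, each biological operation comes out
# roughly nine orders of magnitude cheaper than its silicon simulation.
```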
The Principles Behind the Brain's Efficiency
The brain's efficiency doesn't come from a single clever feature. It's the product of several interlocking architectural properties, each one substantially different from how conventional computing systems are designed.
Memory and processing occupy the same physical space
In a conventional computer, memory and processing are physically separate. Data moves constantly between storage, RAM, and the processor, burning energy at every step and creating the structural slowdown known as the von Neumann bottleneck.
The brain doesn't have this problem. Synapses both store information and participate in computation; there's no equivalent shuttle back and forth. As one researcher at the University at Buffalo put it: "It's not as if the left side of the brain holds all the memories and the right is where all learning happens."[3]
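
A toy cost model makes the bottleneck visible. The per-event energies below are rough, commonly cited orders of magnitude for modern silicon, not measurements from any particular chip.

```python
# Toy energy model of the von Neumann bottleneck. The per-event energies
# are rough, commonly cited orders of magnitude for modern silicon,
# not measurements from any specific chip.

E_MAC_PJ = 1.0      # one multiply-accumulate, picojoules
E_DRAM_PJ = 640.0   # fetching one 32-bit word from off-chip DRAM

def energy_uj(n_ops: int, dram_fetches: int) -> float:
    """Total energy in microjoules: compute plus data movement."""
    return (n_ops * E_MAC_PJ + dram_fetches * E_DRAM_PJ) / 1e6

# Every operand shuttled in from memory vs operands already collocated
# with the compute (the synapse situation):
print(f"Movement-heavy: {energy_uj(1_000_000, 1_000_000):.0f} uJ")
print(f"Collocated:     {energy_uj(1_000_000, 0):.0f} uJ")
```

Under these assumptions, moving the data costs several hundred times more than computing on it, which is exactly the overhead the synapse never pays.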
Sparse activation: most neurons are quiet at any given moment
A processor running an AI model keeps vast numbers of transistors switching continuously, regardless of whether those operations are immediately needed. Neurons don't work that way. At any given moment, only a small fraction are actively firing; the rest are quiet and use very little energy. The brain's power draw scales with what it's actually doing, not with its theoretical maximum capacity.
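
The scaling argument is easy to sketch. The 2% active fraction below is an illustrative assumption rather than a figure from the cited sources.

```python
# Sketch of why sparse activation saves energy: if cost scales with the
# number of units that actually fire, a mostly quiet network is cheap.
# The ~2% active fraction is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1_000_000
active = rng.random(n_neurons) < 0.02   # ~2% firing at this instant

dense_cost = n_neurons                  # every unit pays, every step
sparse_cost = int(active.sum())         # only firing units pay

print(f"Dense cost:  {dense_cost:,} unit-events")
print(f"Sparse cost: {sparse_cost:,} unit-events ({sparse_cost / dense_cost:.1%})")
```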
Event-driven, analogue signalling
Digital transistors switch between on and off billions of times per second, consuming power with every transition. Neurons fire electrochemical spikes called action potentials, using energy only at the moment of transmission and sitting at rest otherwise. Power consumption tracks actual information flow rather than running flat-out continuously.[7]
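
The standard textbook abstraction of this behaviour is the leaky integrate-and-fire neuron. The sketch below uses illustrative parameters; the point is that cost in such a model attaches to spikes, not to clock ticks.

```python
# A minimal leaky integrate-and-fire (LIF) neuron: membrane potential
# decays toward rest, integrates input, and emits a spike only when it
# crosses threshold. Parameters are illustrative, not fitted to biology.

def lif_run(inputs, tau=20.0, v_rest=0.0, v_thresh=1.0, dt=1.0):
    """Simulate one LIF neuron; return spike times."""
    v, spikes = v_rest, []
    for t, i_in in enumerate(inputs):
        v += dt * ((v_rest - v) / tau + i_in)   # leak toward rest + drive
        if v >= v_thresh:                        # threshold crossing
            spikes.append(t * dt)
            v = v_rest                           # reset after firing
    return spikes

# Weak sub-threshold drive punctuated by a brief strong input:
drive = [0.02] * 50 + [0.30] * 10 + [0.02] * 50
print("Spike times:", lif_run(drive))   # energy is spent only at these times
```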
Five hundred million years of evolutionary refinement
Biological neural architecture has been under selection pressure for roughly half a billion years. Configurations that wasted energy were progressively weeded out. The human cortex is the accumulated result - enormously complex compared to early nerve nets, yet still running within the same basic metabolic budget.[1] Silicon computing has had about seventy years.
Neuromorphic Computing: Building Hardware Inspired by the Brain
The dominant research response to AI's energy problem is neuromorphic computing: hardware that physically mimics aspects of the brain's architecture, rather than running brain-like software on fundamentally brain-unlike hardware. The field dates to the 1980s but has gained serious momentum as energy costs have moved from academic concern to practical constraint.[3]
The starting point is that the two systems are not as structurally alien as they look. Conventional computers encode information with transistors that either conduct electricity or block it. Neurons do something structurally analogous: they fire or they don't. Neuromorphic research tries to build hardware that goes beyond that simple binary switching, toward artificial neurons and synapses that signal more like biological ones, with memory and processing in the same place rather than separated.
"There's nothing in the world that's as efficient as our brain. It's evolved to maximise the storage and processing of information and minimise energy usage."
The hard part is the materials. To reproduce event-driven signalling in silicon, you need substances whose electrical conductivity can be switched with enough precision to simulate the synchronised oscillations seen in brain imaging, and that can hold their state without continuous power input. Two material classes are currently at the frontier: phase-change materials (PCM), which flip between conductive and resistive states under controlled electrical pulses, and spin-based materials that exploit quantum magnetic properties to store and process information in the same physical device.
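
In software, the target behaviour of such a device looks something like the toy model below. The pulse amplitudes and bounds are invented for illustration, not drawn from TDK or CEA device data.

```python
# Toy model of a memristive synapse of the kind this materials work aims
# at: a conductance state that is *moved* by programming pulses and then
# *holds* without power. All numbers are illustrative assumptions.

class MemristiveSynapse:
    def __init__(self, g=0.5, g_min=0.0, g_max=1.0):
        self.g, self.g_min, self.g_max = g, g_min, g_max  # stored state

    def pulse(self, amplitude):
        """A programming pulse nudges conductance; sign sets direction."""
        self.g = min(self.g_max, max(self.g_min, self.g + amplitude))

    def read(self, voltage):
        """Reading is just Ohm's law; the stored state *is* the weight."""
        return self.g * voltage   # current = conductance x voltage

syn = MemristiveSynapse()
for _ in range(3):
    syn.pulse(+0.1)               # potentiate with three "set" pulses
print(f"Weight: {syn.g:.1f}, read current at 0.2 V: {syn.read(0.2):.2f}")
# The same element stores the weight and performs the multiply -
# memory and processing in one place, as in a biological synapse.
```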
Recent Research Directions
Spin-memristors (TDK / CEA, 2024–2025)
In collaboration with the French Alternative Energies and Atomic Energy Commission, TDK has demonstrated a working spin-memristor - a device that uses quantum magnetic properties to function simultaneously as memory and processor, much as a biological synapse does. The company's stated target is chips that cut power consumption to less than 1/100th of current AI processing requirements, a reduction that conventional semiconductor miniaturisation simply cannot reach.[4]
Phase-change neuromorphic chips (University at Buffalo, 2025)
Physicist Sambandamurthy Ganapathy leads a National Science Foundation-funded team working with phase-change materials to build artificial neurons and synapses that reproduce the rhythmic electrical oscillations visible in brain imaging. Energy efficiency is part of the goal, but not all of it. Chips built on these principles may also process information more adaptively than conventional architectures, potentially handling tasks with limited training data more effectively.[3]
Super-Turing AI and Hebbian learning (Texas A&M, 2025)
Texas A&M is coming at the problem from a different angle. Rather than redesigning hardware, their approach integrates learning and memory at the algorithmic level, targeting the training-inference split that accounts for much of conventional AI's computational cost.
The framework is built on Hebbian learning, the neuroscience principle often summarised as "cells that fire together, wire together", combined with spike-timing-dependent plasticity, which adjusts connection strength based on precise timing between neurons. Standard AI training uses backpropagation, a global error signal passed backwards through the entire network, with no real biological equivalent. This becomes increasingly expensive as models grow.[8]
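
Both rules are worth seeing in miniature. The sketches below are generic textbook forms with illustrative constants, not the Texas A&M implementation, which hasn't been published as code here.

```python
import numpy as np

# Hebbian rule: the weight change is proportional to the product of
# pre- and post-synaptic activity - "fire together, wire together".
def hebbian_update(w, pre, post, lr=0.01):
    return w + lr * np.outer(post, pre)

# Pair-based STDP: a presynaptic spike shortly *before* a postsynaptic
# one (dt > 0) strengthens the connection; shortly *after*, it weakens.
def stdp_delta(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)   # causal: potentiate
    return -a_minus * np.exp(dt_ms / tau_ms)      # acausal: depress

w = hebbian_update(np.zeros((2, 3)), pre=np.array([1.0, 0.0, 1.0]),
                   post=np.array([1.0, 0.0]))
print(w)                                  # only co-active pairs changed
print(stdp_delta(+5), stdp_delta(-5))     # ~+0.0078 vs ~-0.0093
```

Both updates use only information local to the synapse; no global error signal has to travel back through the network, which is exactly the contrast with backpropagation drawn above.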
The team calls this "Super-Turing AI": learning and memory integrated into the same hardware operation, removing the need to shuttle large volumes of data between components during training. In a practical test, a drone navigated a complex environment without any prior training, adapting in real time. It was faster and less energy-intensive than a conventional AI approach to the same task.[8]
Researcher perspective
"Modern AI like ChatGPT is awesome, but it's too expensive. We're going to make sustainable AI." Dr Peng Li, Texas A&M, 2025. Where most neuromorphic programmes focus on hardware alone, the Super-Turing approach targets both hardware and algorithmic efficiency together.[8]
| Approach | Institution / Company | Brain feature being replicated | Reported efficiency target |
|---|---|---|---|
| Spin-memristor chips | TDK / CEA | Collocated memory and processing (synaptic) | Target: <1/100 of current AI power draw |
| Phase-change neuromorphic chips | University at Buffalo | Event-driven spiking; synchronised oscillations | Significant reduction; hardware in development |
| Super-Turing AI (Hebbian) | Texas A&M University | Integrated learning and memory; on-the-fly adaptation | Demonstrated improvement over backpropagation in drone navigation test |
Selected neuromorphic research programmes, 2024–2025. Sources as cited.
Where AI Architecture Has Converged on Brain-Like Solutions
Something curious has happened in AI development. Many of the architectural solutions engineers landed on were not borrowed from neuroscience; they emerged independently from solving engineering problems. Yet they share structural similarities with features of the biological brain. Researchers across the field have noticed, and it raises a genuinely interesting question: whether some architectural features are near-universal responses to the challenge of building efficient intelligence within physical constraints.
| Human brain structure and function | Approximate AI equivalent (2025) |
|---|---|
| Anterior Cingulate Cortex (conflict detection) | Critic node that flags contradictions before output |
| Prefrontal Cortex (goal maintenance) | Chain-of-thought reasoning and system prompts |
| Dopamine reward signal | Reinforcement learning reward model |
| Neuroplasticity | Fine-tuning and LoRA weight adaptation |
| Sparse, parallel neural activation | Mixture-of-Experts (MoE) sparse activation |
| Hippocampal memory consolidation | Retrieval-augmented generation (RAG) |
Structural parallels between biological and artificial neural systems. These are functional analogies, not precise anatomical mappings.
None of these parallels should be pushed too hard. The biological mechanisms behind each function are far more complex than their engineering counterparts. But the directional similarity keeps showing up, which suggests the brain's architecture may be a useful long-term reference point for AI design even where the specific biology can't be replicated.
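
To make one of those analogies concrete, here is a toy Mixture-of-Experts router, the engineering cousin of sparse biological firing. The shapes and gating scheme are a generic sketch, not taken from any specific model.

```python
# Toy Mixture-of-Experts router: each input is sent to only its top-k
# experts, so most of the network stays idle per token. Dimensions and
# gating are generic illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 8, 16, 2

gate_w = rng.normal(size=(d_model, n_experts))           # router weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    logits = x @ gate_w
    chosen = np.argsort(logits)[-top_k:]                 # top-k experts only
    weights = np.exp(logits[chosen]) / np.exp(logits[chosen]).sum()
    # Only k of the n expert matrices are touched for this input:
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen)), chosen

x = rng.normal(size=d_model)
_, active = moe_forward(x)
print(f"Active experts for this input: {sorted(active.tolist())} of {n_experts}")
```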
The Deeper Differences: What Hardware Improvements Do Not Directly Address
Architectural convergence is real. But some differences between biological and artificial intelligence run deeper than hardware, and redesigning chips won't touch them.
Continuous learning versus discrete training
Current AI systems are trained in a discrete, resource-heavy phase, and once that's done, their parameters are largely fixed. Using the model doesn't update what it knows in any sustained way. The brain has no equivalent separation. Synaptic connections adjust throughout life, consolidating during rest and sleep. It's simultaneously always learning and always operating.
Texas A&M's Super-Turing approach is one of the more serious attempts to close that gap in AI. But even Hebbian learning systems operate within an externally defined reward structure. Biological learning is shaped by embodied experience, emotional state, social context - signals with no current AI equivalent.
Embodiment and grounding in the physical world
The brain didn't evolve to process language. It evolved to keep a body alive: navigating space, managing energy, sensing threats, maintaining relationships. Language and abstract reasoning are relatively recent additions sitting on top of a system whose primary job is survival. The brain draws on the nervous system, the hormonal system, the immune system, and the senses continuously. Emotion isn't a module bolted on the side, it's woven into how decisions get made.
AI systems have none of that grounding. No hunger, no pain, no tiredness, no social stakes, no experience of time passing. Modern AI performs remarkably well on language and reasoning precisely because a lot of what we call intelligence turns out not to require biological embodiment. But that doesn't mean embodiment is irrelevant to the kind of open-ended, flexible intelligence the brain actually produces.
General versus narrow capability
The brain is extraordinarily efficient at doing everything. AI is most efficient when doing one thing well. For translation, image classification, code generation, and pattern recognition across large datasets, AI systems are highly capable and increasingly economical. Ask them to continuously adapt to genuinely novel circumstances, or to generalise from a handful of examples the way a child does, and the comparison shifts considerably.
"Even the tasks performed in the construction of a footpath, involving spatial planning and the use of several tools to manipulate a variety of materials, require greater computational performance than any advanced AI system can match."
A peer-reviewed analysis in Frontiers in Neuroscience argues that the energy required for a hypothetical artificial general intelligence, one capable of matching the collective intelligence of human civilisation, would likely exceed the total power output available to industrialised nations under current semiconductor architectures.[6] Getting to broad general intelligence isn't a matter of running today's systems at greater scale.
The Efficiency Trajectory and Its Constraints
Progress on AI energy efficiency is genuine and well-evidenced. GPU efficiency has improved at roughly 1.28 times per year since 2010. Algorithmic improvements over the past decade have cut the compute needed to reach a given level of model performance by an estimated factor of 20,000.[9] Neuromorphic hardware is edging out of the lab. The energy profile of AI in 2035 could look very different from today's.
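
Compounding those two cited rates gives a rough upper bound on the decade ahead, assuming both trends persist, which is far from guaranteed.

```python
# Compounding the two trends cited above (both are approximate estimates).

GPU_EFF_GROWTH = 1.28           # per-year hardware efficiency multiplier [9]
ALGO_GAIN_PER_DECADE = 20_000   # algorithmic compute reduction, past decade [9]
years = 2035 - 2025

hardware_gain = GPU_EFF_GROWTH ** years
print(f"Hardware alone over 10 years: ~{hardware_gain:.0f}x")
print(f"If algorithmic progress repeats its last decade: "
      f"~{hardware_gain * ALGO_GAIN_PER_DECADE:,.0f}x combined")
# Even both factors together fall far short of the ~2.7e13 gap estimated
# earlier - which is the point about ceilings made below.
```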
What efficiency projections tend to underplay is the rebound effect. When something gets cheaper, people use more of it. If neuromorphic chips substantially reduce the cost of AI inference, the likely outcome is wider deployment across more applications, which may partially offset the per-query savings. Worth keeping in mind when projecting net impact on the sector.
There's also a ceiling question. The most demanding AI systems, particularly those pushing toward general reasoning and adaptive behaviour, will probably stay more energy-intensive than narrow ones for some time, regardless of hardware improvements. The brain's efficiency isn't separable from its full architecture, its evolutionary history, and the fact that it lives inside a body. Copying individual features of that efficiency is valuable. Assuming it resolves the deeper differences would be a mistake.
Where This Leaves Us
The brain's efficiency is a product of half a billion years of selection pressure, tight biological constraints, and an architecture built around keeping a body alive rather than completing tasks. AI has been engineered over decades, under different constraints, toward different ends. The energy gap between the two is a reflection of that history, not just a set of engineering choices waiting to be fixed.
The research covered here - neuromorphic hardware, Hebbian learning, spin-memristors, event-driven spiking - represents a genuine shift in how the field is thinking about the problem. Engineers are no longer just asking how to run existing AI more cheaply. They're asking how to build systems whose underlying architecture more closely resembles the one evolution produced. Those are different questions, and the answers may have implications beyond energy efficiency.
In the end, the gap between a brain and an AI model isn't really about watts or joules. The brain is a system shaped to keep a conscious creature alive in a social world. Current AI is a tool designed to optimise particular functions within objectives set by someone else. Both of those things can be true while neuromorphic research makes real progress on closing the architectural distance between them.
References
1. Phillips, T. and van der Walt, E. (2024). Energy Efficiency in Artificial and Biological Intelligence. Neurozone Blog.
2. Schuman, C.D. et al. (2023). The energy challenges of artificial superintelligence. Frontiers in Neuroscience / PMC.
3. University at Buffalo (2025). How can AI be more energy-efficient? UB researchers look to human brain for inspiration. UB News.
4. TDK (2024). Cutting AI's Power Consumption Down to 1/100 with Neuromorphic Devices Inspired by the Human Brain. TDK Featured Stories.
5. MKWritesHere (2025). Human Brains Beat AI by 225,000 Times in Energy Efficiency. Write A Catalyst / Medium.
6. Schuman, C.D. et al. (2023). The energy challenges of artificial superintelligence. Frontiers in Neuroscience. PMC10629395.
7. ISAT Academy (2025). Smarter Per Watt: Human Mind vs AI Energy Efficiency.
8. Texas A&M University (2025). Artificial intelligence that uses less energy by mimicking the human brain. Texas A&M Stories.
9. Bilancioni, M. (2024). Energy Efficiency: AI vs. the Human Brain. Medium.
