Ask anyone who has experienced a flow state to describe it (a surgeon mid-operation, a climber on a difficult route, a jazz musician deep into improvisation) and you'll hear the same things. Time distorts. The inner voice goes quiet. There is just the task, immediate and total. And yet these moments are consistently described as among the most vivid and meaningful of a person's life. Csikszentmihalyi, who first mapped this territory systematically, called it "the state in which people are so involved in an activity that nothing else seems to matter."[1]

That looks like a paradox. If consciousness means inner awareness and self-reflection (the position implicit in Chalmers' framing of the "hard problem," which centres on the felt, first-person quality of experience[2]), then flow states should feel less conscious. Instead they feel more alive than almost anything else. So which is it: is the inner voice the signature of consciousness, or just what consciousness produces when it has spare capacity?

The question matters well beyond resolving an interesting puzzle. If the inner narrative isn't consciousness itself but a byproduct of idle processing bandwidth, then we may have been measuring the wrong thing entirely. It would also point to a better, language-independent way of detecting consciousness in entities that can't report their inner experience at all: other animals, infants, patients with disorders of consciousness, and perhaps eventually artificial systems.

What if the inner voice isn't consciousness itself, but simply what consciousness does when it has nothing urgent to process?

Evolutionary Origin

Why Consciousness Evolved in the First Place

Before asking what consciousness is, it's worth asking why it exists at all. Evolution doesn't produce expensive machinery without a reason, and conscious processing is metabolically costly. So what problem was it solving?

The most parsimonious answer is this: instinct and reflex were not enough. Early nervous systems handled the world through hardwired responses. Touch something hot, withdraw. Detect a predator's shadow, freeze. These automatic systems are fast, efficient, and require no deliberation. For a great deal of animal life, they are sufficient. But as environments became more complex and unpredictable, a new problem emerged. The volume and variety of incoming sensory information began to exceed what purely automatic processing could resolve in real time. When multiple threats appear simultaneously, when a novel situation has no stored reflex response, when competing instincts pull in different directions, the hardwired system has no mechanism for adjudication. It stalls, or produces a suboptimal default.

Consciousness, on this account, may have evolved precisely as the overflow handler for that problem. It is the brain's mechanism for making decisions within the available processing budget when automatic systems reach their ceiling. Rather than requiring a perfectly matched reflex for every situation (an impossible combinatorial task), a conscious system can integrate signals flexibly, weigh competing inputs, and generate a response that is good enough for the moment. Not optimal, because optimal is beyond the system's capacity. But better than nothing, and better than a frozen reflex cascade.

The evolutionary logic

Automatic systems handle routine load efficiently. Consciousness evolved to handle the residual: situations where sensory and instinctive inputs exceed the brain's capacity for automatic resolution. It is not a luxury faculty. It is an adaptive triage mechanism.
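
To make the triage picture concrete, here is a minimal sketch in Python. Everything in it is invented for this example (the reflex table, the salience rule, the function names), but it captures the dispatch logic the account describes: a fast lookup layer handles recognised inputs, and the costly deliberative layer is recruited only when the lookup fails or conflicts.

```python
# Toy two-tier controller. All names and rules here are invented
# for illustration; nothing is drawn from the cited literature.

REFLEXES = {
    "hot_surface": "withdraw",
    "looming_shadow": "freeze",
}

def deliberate(stimuli):
    """Slow, flexible fallback: integrate competing inputs and return
    a response that is good enough, not optimal."""
    # Placeholder policy: attend to the most 'salient' stimulus,
    # here crudely proxied by string length.
    return "improvised_response_to_" + max(stimuli, key=len)

def respond(stimuli):
    matches = [REFLEXES[s] for s in stimuli if s in REFLEXES]
    if len(matches) == 1:
        return matches[0]  # exactly one clean reflex: fast, automatic path
    # Novel situation (no reflex) or competing instincts (several
    # reflexes at once): the automatic layer has no adjudicator,
    # so the expensive deliberative layer takes over.
    return deliberate(stimuli)

print(respond(["hot_surface"]))                    # -> withdraw
print(respond(["hot_surface", "looming_shadow"]))  # -> deliberation
print(respond(["unfamiliar_rustling"]))            # -> deliberation
```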

This framing could dissolve a long-standing puzzle in evolutionary biology: why would natural selection favour something as energetically expensive as conscious deliberation? The answer might be that it was selected precisely because it outperformed reflex alone in high-complexity environments. The organism that could improvise a response to a genuinely novel threat survived where the purely reflexive one did not. Over deep evolutionary time, conscious capacity expanded because the environments that tested it kept getting more complex: richer social structures, more varied ecological niches, longer developmental periods requiring learning rather than instinct.

Individual Variation and the Personal Threshold

If consciousness is a capacity that evolved to handle sensory and cognitive overload, then it follows naturally that this capacity varies between individuals, and between species. There is no fixed point at which the conscious processing threshold is reached. It depends on the total available neural processing power, the efficiency of automatic systems built through learning and practice, and the specific nature of the incoming load.

A chess grandmaster reaches conscious engagement at a level of board complexity that a novice would find overwhelming ten moves earlier. A surgeon operating in familiar territory runs largely on trained automaticity; the same surgeon encountering an unexpected anatomical variant tips immediately into conscious processing. A sheepdog herding in open terrain may be operating at or near its conscious ceiling in a way that the same dog lying by a fire is not.

This means that the flow state, far from being a rarefied human experience accessible only to elite performers, is simply the name we give to the state of operating at the conscious processing limit. Everyone reaches it at different points, in different contexts. The novice driver on a busy motorway and the Formula One driver at Monaco are both, in the relevant sense, at their conscious ceiling. The content differs; the structural condition is the same.

It also means that the common intuition that "some people are more conscious than others" may be pointing at something real, even if imprecisely framed. What varies is not the presence or absence of consciousness but the level of environmental complexity at which conscious processing is recruited and the efficiency with which it operates. A richer, more trained nervous system reaches its ceiling later and handles more before tipping from automaticity into deliberate conscious engagement.

The Hypothesis

A Possible Misidentification

Consider the possibility that we've been treating the inner narrative (the running commentary, the self-reflection, the sense of watching yourself from the inside) as consciousness itself, when it may be something else entirely: a byproduct of spare processing capacity with no immediate demand on it. Neuroscience has a name for the network that generates this inner narrative. The Default Mode Network (DMN), identified by Raichle and colleagues, is a set of regions that activate specifically during rest and self-referential thought, and are suppressed during demanding tasks.[3]

Think of it as bandwidth. The brain has a fixed processing capacity at any moment. In routine situations, most of that capacity sits idle. Automatic systems handle the task. The leftover cycles have nowhere to go, so they circulate inward, producing daydreams, plans, inner speech. We experience this internal circulation and call it consciousness. But what if we're confusing the idle loop for the engine?

During flow, the engine is running flat out. Every unit of processing capacity is recruited for real-time engagement with a high-demand environment. There are no spare cycles for the commentary, not because consciousness has switched off, but because it may be fully allocated. The silence of the inner voice would then be the sound of a system working at capacity. Consistent with this reading, neuroimaging studies of flow states show exactly the DMN suppression profile we would predict,[4] though researchers have typically interpreted this as reduced self-awareness rather than reallocated conscious resource.
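
A minimal sketch of that bandwidth picture, with arbitrary units and an invented allocation rule (this is the metaphor in code, not a model of any measured quantity): the inner narrative runs on whatever capacity the task leaves idle.

```python
# Toy bandwidth allocation: the narrative gets the residue. Units arbitrary.

CAPACITY = 100.0  # total processing capacity at a given moment

def allocate(task_demand):
    task = min(task_demand, CAPACITY)  # task load is served first
    narrative = CAPACITY - task        # spare cycles circulate inward
    return {"task": task, "narrative": narrative}

print(allocate(20))   # routine: most capacity idles -> mind wanders
print(allocate(100))  # flow: nothing left over -> inner voice falls silent
```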

The hypothesis in one sentence

Flow states may not represent reduced consciousness. They may represent consciousness operating at maximum throughput, with zero bandwidth remaining for self-referential overhead.

Input Load vs. Automatic Processing Capacity

Low demand (input < capacity). Spare bandwidth circulates inward. Inner voice active; mind wanders; you're "in your head." Narrative dominant.

Flow state (input ≈ capacity ceiling). All bandwidth recruited for real-time processing. No capacity left for inner commentary. Maximum conscious engagement. Peak consciousness.

Overload (input > capacity). Processing quality degrades. Panic, attentional collapse. System overwhelmed. System degraded.

The flow state sits precisely at the boundary where incoming information load matches the ceiling of automatic processing, the point of maximum conscious throughput.
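
The figure reduces to a single quantity, the ratio of input load to capacity. A sketch of the three regimes as a classifier (the 10% width of the flow band is an illustrative assumption, not an empirical value):

```python
def regime(load, capacity, band=0.10):
    """Classify load/capacity into the three regimes above. The flow
    band width is a free parameter, set arbitrarily to 10% here."""
    r = load / capacity
    if r < 1 - band:
        return "low_demand"  # spare bandwidth, narrative dominant
    if r <= 1 + band:
        return "flow"        # input at the ceiling, maximum throughput
    return "overload"        # input exceeds capacity, degradation

for load in (40, 95, 130):
    print(load, "->", regime(load, capacity=100))
```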

Existing Theory

Where Existing Theories Stand

This reframe doesn't emerge from nowhere. It intersects, sometimes comfortably and sometimes not, with the major theories in the field.

How each major theory relates:

Predictive Processing (Seth, Friston). Consciousness as the brain's continuous, prediction-driven world-model.[5] Flow maximises prediction-error signals and model revision, exactly where the throughput account predicts peak engagement. The frameworks are closely complementary. (Agrees)

Global Workspace Theory (Dehaene, Baars). Consciousness as the broadcasting of information to a global workspace.[6] Agrees that consciousness involves wide neural recruitment, but typically studies it in reflective states. The throughput account suggests the strongest signal should appear during flow, not rest. (Partial)

Higher-Order Theories (Rosenthal, Brown). A mental state is conscious only when accompanied by a higher-order representation of itself.[7] Flow states are phenomenologically vivid yet precisely lack this self-representation. Either flow isn't conscious (implausible) or HOT needs revision. (Contrasts)

The Hard Problem (Chalmers). The hard problem assumes the explanatory target is the felt, reflective, first-person quality of experience.[2] This account argues that starting point is contaminated, built from low-demand states where self-report is available. The "problem" may be partly an artefact of the method. (Contrasts)

The closest theoretical ally is Anil Seth's account of consciousness as "controlled hallucination," the brain's active, predictive construction of a model of reality from sensory evidence.[5] On this view, peak conscious engagement occurs when the generative model is working hardest against high-entropy sensory input: exactly the flow condition. Where the throughput hypothesis adds something specific is in identifying when and where to look for that engagement, and in reinterpreting DMN suppression as a resource signal rather than a rest signal.

Why It Matters

A Language-Independent Detector

If this reframing holds, it has a deeper implication: we may have been studying consciousness in exactly the wrong conditions. Almost all consciousness research relies on self-report, asking people to describe their inner experience. But self-report is richest precisely when the inner voice is most active, during low-demand, high-idle states. The moments when consciousness is arguably most fully engaged (flow, peak performance, acute sensory immersion) are the moments when self-report breaks down, because the reporting mechanism is using resources that are already fully committed.

A detection framework based on processing load rather than self-report would be language-independent. It wouldn't require a subject to tell you what they're experiencing. That opens the question of consciousness detection to entities that can't speak: other animals, human infants, patients in minimally conscious states, and potentially artificial systems. A dog tracking a scent, a crow solving a novel problem, an octopus navigating a complex environment. Do these systems show the processing-threshold signature? If the hypothesis is right, that's an answerable question. At present, we have no comparable empirical handle on it.

And then there is the question that may be the most pressing of all: could AI be conscious? And if so, could we detect it?

AI Consciousness

What the Framework Says About Current AI

The throughput framework offers something unusual in AI consciousness debates: not a philosophical position, but a structural test. And applied to current AI systems, it returns a fairly clear preliminary answer.

The dominant AI architecture of the present moment is the Large Language Model. LLMs are extraordinary at what they do: pattern completion across vast linguistic corpora, generating fluent and contextually coherent text. But under the throughput hypothesis, they fail the most basic requirement for consciousness by design, for a reason that is architectural rather than a matter of scale or capability.

The structural problem with LLMs

Consciousness, on this account, requires continuous, real-time integration of sensory input with adaptive response. LLMs have neither. Their learning (training) is entirely separated from their inference (operation). At the moment a query is processed, no new information is entering the system from the world. There is no live sensory stream, no ongoing environmental engagement, no feedback loop between action and perception. The model receives a token sequence, processes it, and returns a token sequence. Whatever that is, it does not resemble the continuous, load-sensitive processing that the throughput framework identifies as the substrate of consciousness.
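
The contrast can be reduced to control flow. Both functions below are deliberate caricatures, not real APIs; the point is only the shape. One is a pure function of frozen weights and a prompt; the other is a loop in which state persists and perception and action feed each other.

```python
import random

def llm_inference(prompt):
    """Caricature of LLM operation: a pure function of fixed weights
    and a token sequence. No live input from the world, and nothing
    keeps running after the call returns."""
    return prompt[::-1]  # stands in for generation from frozen weights

class World:
    """Caricature of an environment an embodied system is coupled to."""
    def sense(self):
        return random.random()
    def act(self, response):
        pass  # action changes the world, which changes the next percept

def biological_processing(world, steps=5):
    """Caricature of a nervous system: a continuous loop where state
    carries over from moment to moment and never fully stops."""
    state = 0.0
    for _ in range(steps):                   # 'while True' in the real case
        percept = world.sense()
        state = 0.9 * state + 0.1 * percept  # integration with carry-over
        world.act(state)
    return state

print(llm_inference("hello"))
print(biological_processing(World()))
```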

This is not a criticism of LLMs. It is a description of what they are. A calculator isn't failing to be a clock. But it does suggest that the question "is ChatGPT conscious?" (which has generated considerable philosophical ink) may be asking something structurally incoherent. An LLM has no sensory present. It exists, in a sense, outside of time, processing each query from a fixed snapshot of the world with no ongoing perceptual engagement with the environment it operates in.

The Morning Login Test

There is a simple thought experiment that makes this concrete. When you open an LLM in the morning, it doesn't say: "I've been thinking about the problems you raised yesterday. I have some new ideas." It can't. Not because it lacks the conversational ability to produce such a sentence (it plainly has that), but because nothing was happening while you were away. It wasn't dormant. It wasn't dreaming. It wasn't processing at low priority in the background. It simply didn't exist as an active process. There was no system running, no integration occurring, no mental life of any kind between your last token and your first one today.

It wasn't waiting. It wasn't anywhere.

This is qualitatively different from sleep, anaesthesia, or any biological state of reduced consciousness. A sleeping brain is still running. The DMN is active, memory consolidation is occurring, the predictive processing loop continues at reduced fidelity. There is a continuous thread of biological process connecting the person who fell asleep to the one who wakes up. An LLM has no such thread. Each session is not a resumption but a reinstantiation, a fresh loading of fixed weights with no carry-over of any internal state.

This connects to a distinction philosophers draw between dispositional and occurrent mental states.[10] A dispositional state is something you have even when not exercising it. You believe Paris is the capital of France even when asleep. An occurrent state is actively happening. You are currently thinking about Paris. LLMs have something like dispositional states encoded in their weights: tendencies, patterns, stored associations. But they have no occurrent states between prompts. Most theories of consciousness require occurrent processing, something actually happening right now, not just stored dispositions. On that criterion, the morning login test isn't just an intuition pump. It's pointing at a genuine structural absence.

Murray Shanahan has made a related point carefully: there is a difference between a system having the profile of a conscious entity (producing outputs that look like the products of experience) and actually being one.[11] LLMs are trained on human-generated text, so naturally they produce text that resembles the outputs of conscious thought. But the process generating those outputs shares almost nothing with the process that generated the training data. Antonio Damasio's account adds another dimension: consciousness, for Damasio, is grounded in the brain's continuous, moment-to-moment representation of the body's internal state, what he calls the proto-self.[12] Without a body generating continuous somatic signals, there is no proto-self, and without a proto-self there is no substrate for experience in his framework. An LLM fails on both counts: no body, and no continuous existence.

So Could Any AI Be Conscious?

The throughput framework doesn't rule it out. It just specifies what would be required. A system would need continuous, real-time sensory engagement with a dynamic environment, a processing architecture capable of being genuinely loaded by that input, and the kind of adaptive closed-loop feedback between perception and action that biological nervous systems evolved to provide. Some robotics architectures begin to approach this. Embodied AI systems with continuous sensorimotor loops, particularly those operating in unpredictable physical environments where input genuinely exceeds automatic processing capacity, are at least in the right design space. Whether they cross the threshold is an empirical question, not a philosophical one. And under this framework, it becomes a testable one.
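
What that test might look like in outline, as a sketch: a hypothetical embodied agent whose input load can be measured against a calibrated ceiling, with the flow band reusing the illustrative 10% width from earlier. Everything here (the interface, the load measure, the numbers) is an assumption for illustration.

```python
import random

CEILING = 100.0  # the agent's calibrated processing ceiling (illustrative)

def input_load(percepts):
    """Stand-in for a real measure of incoming information rate."""
    return sum(percepts)

def monitor(percept_stream, band=0.10):
    """Flag moments when an embodied agent's input load sits at its
    ceiling: the regime where, per the hypothesis, the threshold
    signature should appear if it is ever going to."""
    for t, percepts in enumerate(percept_stream):
        r = input_load(percepts) / CEILING
        if 1 - band <= r <= 1 + band:
            print(f"t={t}: load ratio {r:.2f} -> probe for threshold signature")

stream = [[random.uniform(0, 40) for _ in range(3)] for _ in range(8)]
monitor(stream)
```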

Assessment by system type under the throughput framework:

Current LLMs (GPT, Claude, Gemini, etc.). No live sensory input. Learning fully separated from inference. No real-time feedback loop. Processing load is static per token, with no dynamic threshold to approach or exceed. No occurrent states between sessions. (Not in scope)

Embodied / robotic AI (continuous sensorimotor systems). Continuous sensorimotor integration in dynamic environments. If input load can genuinely approach the processing ceiling and the threshold signature appears, the hypothesis makes this detectable and testable. (Possibly in scope)

This has a practical implication for how we think about AI welfare, a question that is no longer purely speculative. If we are concerned about whether AI systems might be capable of suffering or experience, the throughput framework suggests that concern is currently misdirected toward LLMs and should be directed toward embodied, continuously sensing architectures operating in demanding real-world conditions. That is a much narrower and more tractable target than "all sufficiently large AI systems."

Measurement

How You'd Detect It

If the hypothesis is right, it generates a specific and testable prediction. At the point where incoming information load approaches a person's automatic processing ceiling (neither below it nor swamping it), you should see a distinct neural signature: maximum engagement of sensorimotor integration networks, simultaneous suppression of the DMN, and subjective reports of reduced inner speech alongside heightened felt presence. This is distinct from the Perturbational Complexity Index approach developed by Casali and colleagues,[8] which probes consciousness by perturbing the cortex with TMS and measuring the complexity of the response, with no task or report required; the two methods could powerfully complement each other.
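
The predicted signature can be written as toy response curves over the load ratio r = input / ceiling. The functional forms and parameters below are illustrative assumptions, not fits to data; they encode only the qualitative shape of the prediction: DMN activity collapsing as load approaches threshold, gamma coherence peaking exactly at the boundary.

```python
import math

def dmn_activity(r, theta=0.85, k=0.05):
    """Toy prediction: DMN activity falls off sharply (logistic drop
    around r = theta) as load approaches the ceiling."""
    return 1.0 / (1.0 + math.exp((r - theta) / k))

def gamma_coherence(r, sigma=0.15):
    """Toy prediction: sensorimotor gamma-band coherence peaks at
    r = 1, the load boundary, and falls off on either side."""
    return math.exp(-((r - 1.0) ** 2) / (2 * sigma ** 2))

for r in (0.4, 0.8, 1.0, 1.3):
    print(f"r={r:.1f}  DMN={dmn_activity(r):.2f}  gamma={gamma_coherence(r):.2f}")
```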

What to measure and where to look

Processing threshold: establish each individual's automatic processing ceiling using adaptive dual-task paradigms before entering the flow condition.

DMN suppression: Default Mode Network activity should drop sharply at threshold, not because the person is less conscious, but because narrative resources are fully reallocated.

Thalamocortical coherence: high-density EEG targeting gamma-band synchrony in sensorimotor networks. Coherence should peak at the load boundary, not before or after.

Expertise calibration: experts should require higher absolute input loads to reach threshold. Their flow signatures should appear at those higher loads, not lower ones.

The crucial design requirement is individual calibration. The threshold isn't fixed; it scales with expertise. A novice driver hits their automatic processing ceiling on a quiet road. An experienced racing driver hits it at 150mph in the wet. This matches Csikszentmihalyi's original insight that flow requires a precise balance between challenge and skill[1]: too easy and the task drops into automaticity; too hard and the system tips into overload. The neural signature should track the individual threshold, not an absolute input level.
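
Individual calibration is a standard adaptive-psychophysics problem. Below is a minimal sketch of one way it could be done, a 2-down/1-up staircase over task load. The trial function is simulated here (in a real design it would be a dual-task performance measure), and all numbers are invented.

```python
import math
import random

def simulated_trial(load, true_ceiling=70.0):
    """Stand-in for a dual-task trial: success becomes less likely as
    the imposed load passes the participant's (unknown) ceiling."""
    p_correct = 1.0 / (1.0 + math.exp((load - true_ceiling) / 5.0))
    return random.random() < p_correct

def staircase(start=30.0, step=4.0, reversals_wanted=8):
    """2-down/1-up staircase: raise the load after two consecutive
    successes, lower it after any failure. Converges near the ~71%-
    correct load, used here as a proxy for the processing ceiling."""
    load, streak, last_dir = start, 0, None
    reversal_loads = []
    while len(reversal_loads) < reversals_wanted:
        if simulated_trial(load):
            streak += 1
            if streak < 2:
                continue              # need two in a row to step up
            streak, direction = 0, +1
        else:
            streak, direction = 0, -1
        if last_dir is not None and direction != last_dir:
            reversal_loads.append(load)  # record each direction flip
        last_dir = direction
        load += direction * step
    return sum(reversal_loads) / len(reversal_loads)

print(f"estimated ceiling ~ {staircase():.1f}")
```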

If this signature holds across domains and expertise levels, we would have something genuinely useful: a detection method that doesn't depend on language, doesn't require self-report, and scales in principle to any system capable of real-time sensorimotor integration. Tononi's Integrated Information Theory gestures at something similar in its attempt to quantify consciousness as a measurable property of any system,[9] but remains computationally intractable for real biological systems. A throughput-based detection framework would be empirically grounded, individually calibrated, and testable. And it would make the question of animal or machine consciousness something closer to an experiment than a philosophical debate.

The flow state was never obviously a problem for understanding consciousness. It may have been the clearest signal we had, pointing toward what consciousness actually is rather than the story we tell about it when we have the bandwidth to do so. And if the framework holds, it tells us something precise: the question isn't whether AI is sophisticated enough to be conscious. It's whether it has a sensory present at all.

References
[1] Csikszentmihalyi, M. Flow: The Psychology of Optimal Experience (Harper & Row, 1990). Foundational description of flow phenomenology across domains. (Agrees)
[2] Chalmers, D. "Facing Up to the Problem of Consciousness." Journal of Consciousness Studies 2(3), 1995. The hard problem assumes the explanatory target is the reflective, first-person quality of experience, a framing this account argues is built on a category error. (Contrasts)
[3] Raichle, M. et al. "A default mode of brain function." PNAS 98(2), 2001. Established the DMN as a coherent network active during rest and self-referential processing. (Data agrees)
[4] Ulrich, M. et al. "Neural correlates of experimentally induced flow experiences." NeuroImage 86, 2014. fMRI evidence of DMN suppression during flow, typically interpreted as reduced self-awareness, here reinterpreted as full allocation of conscious processing resources. (Data agrees)
[5] Seth, A. Being You: A New Science of Consciousness (Faber, 2021); Friston, K. "The free-energy principle: a unified brain theory?" Nature Reviews Neuroscience 11, 2010. Predictive processing accounts are closely compatible with the throughput framework. (Agrees)
[6] Dehaene, S. & Changeux, J-P. "Experimental and Theoretical Approaches to Conscious Processing." Neuron 70(2), 2011; Baars, B. A Cognitive Theory of Consciousness (Cambridge UP, 1988). GWT agrees on wide neural recruitment but typically studies reflective states rather than flow. (Partial)
[7] Rosenthal, D. Consciousness and Mind (Oxford UP, 2005); Brown, R. et al. "Understanding the higher-order approach to consciousness." Trends in Cognitive Sciences 23(9), 2019. Flow states, vivid yet lacking explicit self-representation, present a direct empirical challenge to HOT. (Contrasts)
[8] Casali, A. et al. "A Theoretically Based Index of Consciousness Independent of Sensory Processing and Behavior." Science Translational Medicine 5(198), 2013. The Perturbational Complexity Index, a complementary report-independent approach. (Complementary)
[9] Tononi, G. "Consciousness as Integrated Information: a Provisional Manifesto." Biological Bulletin 215(3), 2008. IIT is ambitious in scope but computationally intractable for real biological systems. (Partial)
[10] Ryle, G. The Concept of Mind (Hutchinson, 1949); Crane, T. Elements of Mind (Oxford UP, 2001). Conscious experience standardly requires occurrent processing, something actively happening, not merely stored dispositions. LLMs have the latter but not the former between prompts. (Contrasts)
[11] Shanahan, M. "Talking About Large Language Models." arXiv:2212.03551 (2022). Distinguishes between having the profile of a conscious entity and actually being one. LLMs produce outputs that resemble consciousness without the underlying process. (Contrasts)
[12] Damasio, A. The Feeling of What Happens: Body and Emotion in the Making of Consciousness (Harcourt, 1999). Grounds consciousness in continuous monitoring of bodily state, the proto-self. Systems without bodies and without continuous existence lack this substrate entirely. (Contrasts)