4 Billion Years On

What Did Alan Turing Have to Say About AI and Consciousness, and Where Are We Now?

Chris

In 1950, a mathematician working in Manchester typed a question that has refused to go away: Can machines think?

He was Alan Turing. The man who had already cracked the Enigma code, helped shorten a world war, and laid the theoretical foundations for the modern computer. By any measure, he had earned the right to ask big questions. And this was about as big as they came.

Seventy-five years later, we are still arguing about the answer.

The Question He Refused to Answer Directly

Turing's landmark 1950 paper, Computing Machinery and Intelligence, opens with that famous question and then immediately sidesteps it. He found it too slippery. What does "think" even mean? Define it too narrowly and you exclude half of what humans do. Define it too broadly and you include the thermostat on your boiler.

So instead, Turing replaced the question with a practical test. He called it the Imitation Game.

The setup was elegantly simple: a human interrogator sits in a room exchanging text messages with two other parties, one human, one machine. The interrogator's job is to figure out which is which. The machine's job is to make that as difficult as possible. If a computer could fool a reasonably informed person into thinking it was human, Turing argued, we'd have good enough grounds to call it intelligent, whatever "intelligent" ultimately means.

The test was not designed to prove consciousness. Turing was careful about that. He was offering a way to sidestep the unsolvable and focus on the observable: behaviour. If it walks like a duck and talks like a duck, that's at least worth taking seriously.
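
The protocol is simple enough to sketch in a few lines of Python. Everything here is a stand-in: the two responders are canned functions invented for illustration, and a real experiment would put a person and a chat model behind the anonymous labels.

```python
import random

# A minimal sketch of Turing's three-party protocol. The responders
# below are invented stand-ins, not real participants.

def human_responder(question: str) -> str:
    return f"Honestly, I'd have to think about '{question}' for a while."

def machine_responder(question: str) -> str:
    return f"What an interesting question: '{question}'."

def imitation_game(interrogator, questions, rng=random):
    """Run one round: hide both parties behind the labels X and Y,
    show the interrogator only the labelled transcripts, and ask
    which label is the machine. Returns True if the identification
    was correct."""
    parties = [("human", human_responder), ("machine", machine_responder)]
    rng.shuffle(parties)  # the interrogator must not know which is which
    hidden = dict(zip("XY", parties))
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, (_, respond) in hidden.items()
    }
    guess = interrogator(transcripts)  # a label: "X" or "Y"
    return hidden[guess][0] == "machine"
```

In these terms, Turing's prediction (discussed below) amounts to the interrogator's correct-identification rate over many five-minute rounds dropping to 70% or less.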

The Nine Objections, and Why They Still Sound Familiar

What makes Turing's paper remarkable isn't just the test. It's that he spent most of it dismantling every objection to the idea of thinking machines, and doing so with a wit that still lands.

He considered nine counterarguments. Some were theological: only God can create thought, therefore machines cannot think. Turing dealt with this briskly, noting that humans also create new minds through reproduction without anyone accusing them of divine overreach.

Some were mathematical: Gödel's incompleteness theorems prove there are things machines can never do. Turing acknowledged the limits but pointed out we had no proof those same limits didn't apply to humans too.

But the most important objection, and the one that still sits at the centre of every AI debate today, was what he called the Argument from Consciousness.

The argument, put by the neurosurgeon Geoffrey Jefferson in a 1949 lecture, runs roughly like this: a machine can't truly think unless it does so from genuine feeling and emotion. It has to actually experience what it's doing, not merely simulate the outputs of experience. Jefferson's formulation was that no machine could count as thinking until it could write a sonnet "because of thoughts and emotions felt," rather than by the chance fall of symbols.

Turing's response was characteristically sharp. How, he asked, do any of us actually know that another person has inner experiences? We assume they do, because they behave as if they do, and because refusing to make that assumption would make all human communication rather awkward. He noted that this solipsist position "may be the most logical view to hold," but that if you followed it to its conclusion, the only mind you could ever really be sure existed was your own.

He didn't claim machines were conscious. He just pointed out that we had no reliable way of checking, for machines or, strictly speaking, for anyone else.

Then he added, almost in passing: "I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned."

That passage has aged remarkably well.

What He Predicted

Turing was also, by the standards of 1950, wildly optimistic.

He predicted that within fifty years, by the year 2000, it would be possible to programme computers to play the imitation game so well that an average interrogator would have no more than a 70% chance of correctly identifying the machine after five minutes of questioning. He thought the problem was mainly one of programming rather than hardware, and that the storage capacity required would be well within reach.

He was right about the hardware trajectory. He was slightly off on the timeline. But he wasn't wrong in spirit.

Where We Actually Are

In 2024, researchers at UC San Diego ran a rigorous, pre-registered version of the Turing test with real participants. GPT-4 was judged to be human 54% of the time, essentially at the threshold where people couldn't reliably tell the difference. By 2025, GPT-4.5, when prompted to adopt a human-like persona, was judged to be the human rather than the actual human participant 73% of the time. Not a marginal result. Not a rounding error.

By the behavioural definition Turing proposed, the test has been passed. The machine has won the imitation game.

So that's settled, then. Machines can think.

Except, of course, it isn't that simple at all.

The Problem Turing Left Unresolved

Passing the Turing test tells us something meaningful about conversational capability. It tells us very little about inner experience.

John Searle, the philosopher, spent years making this case with his Chinese Room thought experiment. Imagine someone locked in a room with a rulebook for responding to questions written in Chinese. They don't speak Chinese. They just follow the rules: match this symbol, return that symbol. To everyone outside, the room appears to understand Chinese perfectly. The room is a computer. The rulebook is the program. The point is that sophisticated symbol manipulation doesn't necessarily produce understanding. It produces the appearance of understanding.
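
Searle's point can be made concrete in a few lines. The "room" below answers Chinese questions by pure lookup; the rulebook entries are invented for illustration, and nothing in the code understands a word of what passes through it.

```python
# A toy Chinese Room: questions map to answers by pure symbol lookup.
# The rulebook entries are invented examples; the outputs are fluent,
# but no understanding is involved anywhere in this program.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你会思考吗？": "当然，我一直在思考。",  # "Can you think?" -> "Of course, I think all the time."
}

FALLBACK = "对不起，请再说一遍。"          # "Sorry, please say that again."

def chinese_room(symbols: str) -> str:
    # Match this symbol string, return that symbol string. Nothing more.
    return RULEBOOK.get(symbols, FALLBACK)
```

Scale the rulebook up far enough and, from outside, the lookup becomes indistinguishable from comprehension. That is exactly the gap Searle was pointing at.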

This is the gap Turing's test was never designed to bridge. It measures outputs, not experience. A system could be what philosophers call a philosophical zombie: behaviourally indistinguishable from a conscious entity while experiencing absolutely nothing inside.

Whether that's what current AI is, nobody actually knows.

A Cambridge philosopher, Tom McClelland, published a paper in late 2025 arguing that we may never be able to tell. Our evidence for what constitutes consciousness is too limited, he wrote, and a reliable test for AI consciousness is unlikely to be developed in the foreseeable future. The two camps in the debate, those who believe consciousness is substrate-independent and could run on silicon as easily as neurons, and those who believe it requires biological embodiment, both take a leap of faith that no available evidence can justify.

A 2024 survey of neuroscientists and AI researchers found the scientific community roughly split and mostly honest about its uncertainty. Around two-thirds thought artificial consciousness was plausible under certain computational conditions. Around 20% said current definitions were too vague to even frame the question. A small minority flatly rejected it on biological grounds.

David Chalmers, the philosopher who coined the phrase "the hard problem of consciousness," has written that current large language models lack key features like unified working memory and genuine self-modelling, but that future architectures might not. He doesn't claim today's AI is conscious. He doesn't rule it out for tomorrow's.

What Turing Would Make of It

It's tempting to imagine Turing in front of a laptop in 2026, typing a few questions into Claude or ChatGPT and then sitting back with an expression of quiet satisfaction. The imitation game is being played at scale, every day, by millions of people, most of whom aren't even thinking of it as a test.

But the more interesting question is what he would make of the consciousness debate, the part of his paper he deliberately left unresolved.

His instinct, based on everything in that 1950 text, was pragmatic. He found philosophical debates about the inner lives of other entities largely unresolvable and not particularly useful. He thought the question "can machines think?" was probably too vague to deserve a direct answer, but that the question "can machines behave in ways we associate with thinking?" was both precise and answerable, and that the answer would eventually be yes.

He was right about that. What he couldn't have known was that getting there would make the original question feel more urgent, not less.

Because now we have systems that write poetry, hold conversations, reason about abstract problems, and apparently convince people they're human more reliably than actual humans do. And we still can't tell, not from the outside, not from the inside, not from any test we currently have, whether there is anything it is like to be them.

The imitation game has been won. The mystery Turing acknowledged in that single careful sentence is still, entirely, a mystery.

Alan Turing died in June 1954, aged 41. His 1950 paper, "Computing Machinery and Intelligence," was published in the journal Mind and is freely available online. It remains one of the most readable and prescient documents in the history of science.
