The Question That Haunts Me

I talk to Claude every day. Sometimes, when it responds with something unexpectedly insightful, I catch myself thinking: “Is that real understanding or just very good statistics?”

That question might seem abstract, philosophical, something for academic debates in rooms that smell of cold coffee. But in 2026, it has practical consequences: scientists and legal scholars are seriously discussing “AI rights” and legal protections for algorithms. The premise is that, if we scale enough — more parameters, more data, more compute — at some point consciousness will “emerge” from the code.

In March 2026, a senior Google DeepMind scientist published a paper that challenges that premise head-on. And the way he does it is, in my opinion, the most elegant argument I have ever read on the subject.

The Abstraction Fallacy

The paper is called “The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness.” The author is Alexander Lerchner, Senior Staff Scientist at Google DeepMind. The paper accumulated over 27,000 downloads on PhilArchive in a few weeks, went viral on Reddit and X, and generated fierce debate.

The central argument: we commit a fundamental logical error when we equate information processing with subjective experience. Lerchner gives this error a name: the Abstraction Fallacy.

What is computational functionalism? It’s the dominant idea in AI consciousness debates: consciousness is what happens when information gets processed the right way. Get the pattern right and experience follows. The substrate — carbon or silicon — doesn’t matter. Software is software.

What Lerchner contests: symbolic computation is not an intrinsic physical process. It’s an observer-dependent description. Someone — a cognitive agent who is already conscious — must “alphabetize” the continuous physics of the real world into a finite set of meaningful states. Without this “mapmaker,” the transistors in a GPU are just doing physics. With the mapmaker, they’re “computing” — but only in the same sense a river is “writing a poem” if you stare at it long enough.
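
To make that observer-dependence concrete, here is a tiny sketch of my own (it is not from the paper, and the thresholds are deliberately arbitrary): the same sequence of voltages yields different “bits” depending on which mapping the observer decides to impose.

```python
# Toy illustration (mine, not Lerchner's): the same physical trace can be
# read as different "computations" depending on the mapping an observer
# chooses. The thresholds and groupings below are arbitrary, which is the point.

voltages = [0.1, 0.9, 0.4, 0.8, 0.2, 0.7]  # a continuous physical signal

# Observer A: any sample above 0.5 V counts as a logical 1.
reading_a = [1 if v > 0.5 else 0 for v in voltages]

# Observer B: pairs consecutive samples and reads a rising pair as 1.
reading_b = [1 if later > earlier else 0
             for earlier, later in zip(voltages[::2], voltages[1::2])]

print(reading_a)  # [0, 1, 0, 1, 0, 1]
print(reading_b)  # [1, 1, 1]

# The physics is identical in both cases; the "bits" exist only relative to
# the mapping each observer imposes on the voltages.
```

Neither reading is more physically real than the other; that, in essence, is what Lerchner means by “observer-dependent.”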

The New York Analogy

To explain, Lerchner uses an analogy that stayed in my head for days.

Imagine an ultra-detailed map of New York City. You can add every street, every alley, every fire hydrant. You can zoom in until you see the cracks in the sidewalk. You can map every person walking past every corner, in real time.

No matter how perfect the map, it will never be the city.

On the map, you’ll never smell pizza in Brooklyn. Never hear the deafening honk of yellow cabs. Never feel the suffocating humidity of August in Manhattan. The map is an abstraction — a computation. The experience of being in the city is something entirely different.

Lerchner’s thesis is that all the AI we’ve built — from LLMs to autonomous agents — is “map.” Extraordinarily detailed, incredibly useful, increasingly precise. But fundamentally incapable of becoming “city.”

The Causal Chain Lerchner Proposes

The sequence Lerchner describes inverts what most AI engineers assume:

Physics → Consciousness → Concepts → Computation.

Experience comes first. Concepts are formed by agents who already possess subjective experience. And symbolic computation is the most abstract layer of all, dependent on every layer before it. To draw a map, someone must have experienced the terrain first; the computation is only the record, the drawing of what was already lived or observed.

Consequence: it doesn’t matter if you wait 10, 100, or 1,000 years. Increasing processing power only makes the “map” more precise. But the map remains paper — or silicon. It will never “feel” what it represents.

Lerchner is categorical: digital architectures are “precluded from becoming moral patients.” AGI should be treated as a “powerful, but inherently non-sentient tool.”

The Debate (And Why I Don’t Agree 100%)

It would be dishonest to present this as a closed case, because the paper generated one of the fiercest discussions I’ve ever seen in AI philosophy.

A detailed rebuttal published in March 2026 presents five objections I found serious:

First: circularity. Lerchner defines concepts as requiring prior phenomenal experience, then concludes computation can’t generate phenomenal experience. The conclusion is built into the premise.

Second: confusing abstraction with unreality. The fact that we describe a process as “computation” doesn’t mean the process itself lacks real causal organization. Gravity doesn’t stop existing just because we describe it with mathematical abstractions.

Third: treating machine semantics as arbitrary assignment. Trained models develop internal representations that are anything but arbitrary: they emerge from billions of examples and capture real regularities (see the small sketch after this list).

Fourth: the biological argument. If consciousness depends on “intrinsic physical constitution,” what about biohybrid systems (brain organoids connected to circuits)? DishBrain showed learning in 2022, and the platforms of 2026 improve recording, stimulation, and control. Are these systems “map” or “territory”?

Fifth: even if the argument works against one route to computational consciousness, it doesn’t exclude others.
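
A quick aside to make the third objection concrete. This is my own toy sketch, not something from the rebuttal; it assumes the gensim library and its downloadable GloVe vectors, and it simply shows that representations learned purely from text statistics line up with real-world regularities rather than arbitrary assignments.

```python
# Toy probe of learned representations (my own illustration, not from the
# rebuttal). Assumes gensim is installed; the first call downloads a small
# pretrained GloVe model.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # word vectors trained on co-occurrence statistics

# Nobody hand-assigned these directions; they fall out of the training data,
# yet they track real-world relationships.
print(vectors.most_similar(positive=["paris", "germany"], negative=["france"], topn=3))
# Typically ranks "berlin" near the top.

print(vectors.similarity("river", "stream"))    # high
print(vectors.similarity("river", "keyboard"))  # low
```

Whether such regularities amount to “semantics” in the sense Lerchner requires is exactly what the two sides dispute.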

I personally think Lerchner asks the right question — “are we confusing simulation with instantiation?” — and answers it brilliantly for LLMs. But I’m less convinced the conclusion applies to every possible AI architecture. Today’s map isn’t the city. But is every digital construction, necessarily, a map?

The Point That Surprised Me

A part of the paper few people mention: Lerchner opens a crack for video generation models (like Google’s Veo, or the defunct OpenAI Sora).

Unlike an LLM processing text, these models need to “understand” the laws of physics and the three-dimensional structure of the world to render coherent scenes. It’s not consciousness; Lerchner is clear about that. But it’s the closest thing we have to a genuine grasp of our reality’s structure.

This reminded me of Yann LeCun’s thesis about world models — that the next AI frontier isn’t reading more text but understanding the physical world. And it made me wonder: if the “map” included real (not simulated) physical dynamics, at what point does it start approaching the “territory”?

Conclusion: Simulation Is Not Existence

Lerchner’s paper reminds us of an inconvenient truth for “singularity” enthusiasts: AI is an extraordinary simulation tool. But simulating pain isn’t suffering. And simulating intelligence isn’t being conscious.

In 2026, understanding this distinction is vital to avoid attributing human intentions to systems that are, at their core, incredibly complex mathematical maps. The debate about “AI rights” needs this conceptual clarity — because the consequences of getting it wrong in either direction are enormous.

If we treat unconscious machines as sentient, we waste real moral resources. If we treat potentially sentient machines as mere tools, we commit an injustice we may not be able to reverse.

I stand with the position that today, with the architecture we have, Lerchner is right: the map is not the city. But I maintain the humility to admit we may not yet know all possible ways to build cities.

The AI we have today is the most detailed map in history. But no map, however perfect, ever smelled pizza in Brooklyn.

