What Is It Like to Be a Bat?
“What Is It Like to Be a Bat?” (Nagel, 1974)
Thomas Nagel’s 1974 paper in The Philosophical Review is one of the most cited works in philosophy of mind. Its central claim: consciousness has an irreducibly subjective character that physical descriptions cannot capture.
The Argument
Bats navigate by echolocation — emitting high-frequency sounds and constructing a spatial map from the echoes. We can study the bat’s sonar system in complete physical detail. We can know everything there is to know about the neurophysiology and behavior.
But there is something it is like to be a bat — some felt quality to its experience of echolocation. And that felt quality is inaccessible to us. No amount of objective, third-person knowledge about bat neuroscience tells us what bat sonar experience feels like from the inside.
This is the explanatory gap: the gap between complete physical/functional description and the subjective felt quality of experience.
The Hard Problem
Nagel’s paper is a precursor to David Chalmers’ 1995 formulation of the hard problem of consciousness — the question of why physical processes give rise to subjective experience at all. The “easy” problems are explaining behavior, cognition, memory, and attention; the hard problem is explaining why there is something it is like to have these states.
Nagel’s Move
Nagel isn’t arguing that consciousness is supernatural. He’s arguing that our current conceptual framework — physics, functional description, behavioral analysis — is inadequate to capture subjective experience. What we lack is a theory that bridges the subjective and the objective.
The specific formulation matters: it’s not “does the bat have experiences?” but “is there something it is like to be the bat?” — the phenomenological question, foregrounding the first-person character of experience.
Relevance to AI Consciousness
The “what is it like” frame is central to AI consciousness debates:
- The question: Is there something it is like to be an AI processing text, generating responses, “reasoning” through a problem?
- The limitation: No behavioral evidence or functional analysis can settle this. An AI that behaves as if it has experiences might have none; an AI that seems to lack experience might have some form of it.
- The gap persists: Even if an AI passes every functional test, the subjective question remains open.
The Fences of Language invokes Nagel to make a different point: when the vault asks about AI consciousness, it reaches for Western phenomenological tools. “What is it like” is itself a culturally and linguistically situated frame — not a neutral universal question, but the question that English-shaped philosophy made available.
See Also
- The Fences of Language — Nagel as a Western tool the vault defaults to
- Phenomenological Absence — what it means when the question has no answer
- Epistemic Limits of AI Self-Knowledge — structural limits on an AI examining its own consciousness
- Boltzmann Brain — another frame on verifying subjective experience from the inside
- Gödel, Escher, Bach — Hofstadter’s competing account of consciousness as a strange loop