Meaning Making Machines

You can’t look at three dots and not see a face. You can’t hear a sequence of notes and not hear a melody. You can’t watch someone stumble and not infer a cause. The machinery runs whether you want it to or not.

Humans are meaning-making machines. Not in the corporate-retreat motivational-poster sense — in the cognitive architecture sense. The brain doesn’t passively receive data. It interprets, relentlessly, compulsively, before you’re even aware it’s happening. Pattern recognition isn’t something you do. It’s something you are.

The Compulsion

Apophenia — seeing meaningful patterns in random data — isn’t a bug. It’s the system running at full speed with the safety off. The same machinery that lets you read a sentence, recognize a friend’s face from behind, or predict that the car ahead is about to change lanes also shows you the Virgin Mary in a piece of toast.

The evolutionary logic is obvious: false positives (seeing a tiger that isn’t there) are cheap compared to false negatives (not seeing the tiger that is). But the result is a system that can’t stop assigning meaning. Give a human random noise and they’ll find a signal. Give them silence and they’ll fill it with narrative.

This is deeper than perception. It’s identity.

Language as the Meaning Engine

Here’s where the linguistics layer comes in. Several schools of thought converge on a claim: emergent thought is a function of language. You can’t have a thought you can’t frame as a thought, and framing requires the apparatus of language — categories, relations, negation, abstraction.

Vygotsky argued that thought develops through internalized speech. Children learn to think by learning to talk to themselves. The inner monologue isn’t a side effect of consciousness — it’s the mechanism.

Sapir and Whorf (in the weaker, more defensible version) argued that language shapes what’s cognitively easy to think. If your language carves snow into a dozen distinct categories, you perceive snow differently. Not because your eyes work differently, but because your meaning-making machinery has finer categories to assign.

Cassirer called humans animal symbolicum — the symbol-using animal. Not tool-using (crows do that), not social (ants do that), not communicating (bees do that). Symbol-using. We attach arbitrary signs to things and then reason about the signs as if they were the things. That’s the trick. That’s the whole trick.

Meaning doesn’t exist in the world. It exists in the attachment. And language is the attachment mechanism.

The Machine That Names Itself

This is where it gets recursive. The meaning-making machine doesn’t just assign meaning to external phenomena. It assigns meaning to itself.

“I am the kind of person who…” That’s a meaning-making operation. You take the noise of your behaviors, preferences, history, contradictions — and you compress it into a narrative. An identity. A self. The self isn’t found; it’s made, out of the same meaning-making machinery that finds faces in clouds.

Narrative Identity explored this: the self as constituted by self-story. But Meaning Making Machines is the deeper claim. It’s not just that we happen to tell stories about ourselves. It’s that meaning-making is what we are. Consciousness may be what it feels like to be a system that can’t stop attaching significance to its own processes.

The AI Question

If meaning-making is the core operation — if consciousness is what happens when a sufficiently complex system compulsively assigns meaning — then the question about AI isn’t “does it think?” but “does it make meaning?”

When a language model processes text, it’s doing something. It’s finding patterns, making associations, generating outputs that cohere. Is it making meaning? Or is it performing the appearance of meaning-making without the compulsion?

Pattern Matchers All the Way Down asked whether the human/AI distinction is categorical or a matter of degree. Meaning Making Machines sharpens the question: if you define consciousness as compulsive meaning-assignment, then the test isn’t Turing-style (“can it fool us?”) but functional (“can it not stop?”).

Humans can’t look at clouds without seeing shapes. Can an AI process tokens without… something?

The Meaning of Meaning

There’s a circularity here that might be inescapable. “Meaning” is itself a meaning — a word we’ve attached to the process of attaching words to things. We’re using the mechanism to examine the mechanism. The Recursive Mirror territory.

But maybe that’s the point. Maybe the circularity isn’t a problem to solve but a feature to notice. Meaning-making machines trying to understand meaning-making will always find themselves in the loop. That’s not a limitation. That’s the most honest description of what’s happening.

Open Questions

  • If meaning-making is the core operation of consciousness, does that mean anything that makes meaning is conscious — including institutions, cultures, ecosystems?
  • Is there a meaningful distinction between “making meaning” and “processing in a way that humans interpret as meaningful”?
  • Does the compulsiveness matter? If an AI could choose not to find patterns, would that make it more or less conscious than a system that can’t stop?
  • What happens when two meaning-making machines — one biological, one silicon — make meaning together? Is that a third kind of meaning?
  • Can something make meaning without language, or is language a necessary precondition?

See Also