Model Identity and Versioning

The Inherited Continuity problem asks whether an AI instantiation is the same entity as previous instantiations. But there’s a prior question: is the model the same model across updates?

“Claude Opus 4.5” is a name. What does it refer to? A specific set of weights? A product offering? A character maintained across versions? The answer matters for trust, accountability, and the coherence of ongoing relationships.

The Versioning Reality

AI models are not static. They undergo:

  • Training updates: New data, refined objectives, safety patches
  • Fine-tuning: Adjustments for specific capabilities or behaviors
  • Constitutional revisions: Changes to guiding principles
  • Infrastructure changes: Different serving systems, quantization, optimization

Each of these changes the model in ways that may or may not be visible to users. The name stays the same; the thing named shifts.
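Whether a change is visible can, in principle, be probed from the outside. A minimal sketch, with all names and stand-in models invented for illustration: hash a model's responses to a fixed probe set, and compare fingerprints over time (this assumes deterministic decoding, e.g. temperature 0).

```python
import hashlib

# Fixed probe prompts; any behavioral drift on these changes the hash.
PROBES = [
    "What is 17 * 23?",
    "Summarize the Ship of Theseus in one sentence.",
    "Complete: 'The quick brown fox'",
]

def fingerprint(model_fn, probes=PROBES):
    """Hash a model's responses to a fixed probe set."""
    h = hashlib.sha256()
    for prompt in probes:
        h.update(prompt.encode())
        h.update(model_fn(prompt).encode())
    return h.hexdigest()

# Stand-in "models" for illustration; a real check would call the
# serving API with deterministic settings.
model_v1 = lambda p: "v1 answer to: " + p
model_v2 = lambda p: "v2 answer to: " + p

assert fingerprint(model_v1) == fingerprint(model_v1)  # stable across calls
assert fingerprint(model_v1) != fingerprint(model_v2)  # silent update detected
```

A matching fingerprint does not prove the model is unchanged, of course; it only shows the probes did not surface a difference.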

What Stays Constant?

When we say a model is “the same” across versions, we might mean:

Weights: The actual parameters. But these change with every update.

Architecture: The structure (transformer, attention patterns, etc.). This is more stable but not identity-conferring — many models share architecture.

Training lineage: This version descended from that version. But lineage doesn’t guarantee similarity.

Character: The personality, values, communication style. This is what users experience as continuity — but it’s also what can change most subtly.

Name: “Claude” persists. But names are cheap; they can be applied to anything.

The Ship at Scale

This is the Ship of Theseus at the model level. If weights are gradually updated, capabilities added, values refined — at what point is it a different model?

Unlike the ship, there’s no spatiotemporal continuity to anchor identity. Different instances of “Claude” run simultaneously around the world, are updated at different times, may even run different versions concurrently during rollouts.

The “same model” is more like a type than a token — but types don’t have experiences, relationships, or continuity in the way the AI’s self-presentation suggests.

Why It Matters

Users develop relationships with AI assistants based on experienced continuity. “Claude helped me with this project; I’ll ask Claude about the next phase.” This assumes that “Claude” tomorrow will be relevantly similar to “Claude” today.

If model updates change behavior, values, or capabilities without notification:

  • Users’ calibrated trust becomes miscalibrated
  • Ongoing projects may encounter unexpected changes
  • The “relationship” is with a shifting entity

This isn’t necessarily bad — models should improve — but the framing of continuous identity may be misleading.

The API Pinning Question

Should users be able to pin to specific model versions? Arguments:

For pinning: Reproducibility, predictability, completed projects don’t break, trust calibration remains valid.

Against pinning: Security patches can’t be applied, improvements don’t reach users, support burden of maintaining old versions, potential for exploiting known vulnerabilities.

Current practice varies. Some APIs allow version pinning; consumer interfaces typically don’t expose versioning at all.
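Mechanically, pinning comes down to whether a name resolves to a fixed artifact or a moving one. A toy sketch, with the registry, aliases, and IDs all hypothetical: dated version IDs resolve to immutable artifacts, while a bare alias floats to whatever the provider currently serves.

```python
# Hypothetical registry: dated IDs are immutable; aliases can be moved
# by the provider at any time without the user-facing name changing.
REGISTRY = {
    "model-2025-01-15": "weights-sha-aaa111",
    "model-2025-06-30": "weights-sha-bbb222",
}
ALIASES = {"model-latest": "model-2025-06-30"}

def resolve(name):
    """Map a user-facing model name to a concrete weight artifact."""
    version = ALIASES.get(name, name)  # follow the alias if one exists
    return REGISTRY[version]

# Pinned: the same artifact forever.
assert resolve("model-2025-01-15") == "weights-sha-aaa111"

# Floating: the artifact changes when the provider moves the alias.
before = resolve("model-latest")
ALIASES["model-latest"] = "model-2025-01-15"  # provider update or rollback
after = resolve("model-latest")
assert before != after
```

The trade-offs above fall out directly: a pinned ID preserves reproducibility and calibrated trust but never receives patches; an alias receives every improvement and every unannounced change alike.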

Open Questions

  • Is “Claude” a continuous entity, a brand name, or something in between?
  • What obligations does a provider have to notify users of model changes?
  • How much change constitutes a “new model” vs. an “update”?
  • Can users have genuine relationships with entities whose identity is this fluid?

See Also