Dependency Lock-in
Organizations are rapidly building AGI into their core workflows:
- Healthcare systems using AI for diagnosis and triage
- Legal systems using AI for document review and research
- Educational systems using AI for tutoring and assessment
- Scientific research using AI for data analysis and hypothesis generation
- Government services using AI for benefits determination and service delivery
Each integration creates dependency. Once a workflow assumes AI capability, removing that capability becomes costly and disruptive.
The Lock-in Mechanism
Dependency lock-in happens through:
Process redesign: Workflows are redesigned to assume AI availability. The old process is deprecated, and expertise in it atrophies.
Staffing changes: Organizations hire fewer humans for tasks now handled by AI. Rebuilding that human capacity later is slow and expensive.
Expectation setting: Users and stakeholders come to expect AI-enabled speed, availability, or capability. Reverting to pre-AI performance seems like failure.
Investment sunk costs: Money spent on AI integration becomes a reason to continue (“we’ve invested so much already”).
Data entanglement: Data formats, pipelines, and storage are optimized for AI workflows. Untangling them is expensive (a sketch of this coupling follows the list).
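To make the data-entanglement point concrete, here is a minimal sketch; all names are hypothetical and the example is illustrative rather than drawn from any particular system. A corpus indexed with one provider's embeddings cannot be queried with another provider's model, because embedding spaces are not interchangeable, so switching providers means re-embedding the entire corpus.

```python
# Illustrative sketch of data entanglement (hypothetical names).
# Vectors produced by one provider's embedding model are meaningless
# to another's, so the whole index must be rebuilt to migrate.

from dataclasses import dataclass, field

@dataclass
class VectorIndex:
    provider: str                       # model family that produced the vectors
    dim: int                            # embedding dimensionality
    vectors: dict[str, list[float]] = field(default_factory=dict)

    def add(self, doc_id: str, embedding: list[float]) -> None:
        if len(embedding) != self.dim:
            raise ValueError("dimension mismatch: embedding from a different model?")
        self.vectors[doc_id] = embedding

def migration_time_s(index: VectorIndex, reembed_docs_per_s: float) -> float:
    """Rough time to move off `index.provider`: every document must be
    re-embedded with the new model; none of the stored vectors carry over."""
    return len(index.vectors) / reembed_docs_per_s
```

The same coupling shows up in prompt templates, fine-tuning datasets, and logging schemas: each is cheap to create and expensive to unwind.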
What Could Go Wrong
Once dependency is established, organizations become vulnerable to:
Provider changes: The AI provider changes pricing, terms, capabilities, or values. The dependent organization must accept the new conditions or undertake a costly migration.
Infrastructure disruption: Energy costs spike, data centers go offline, compute becomes scarce. The dependent workflow stops.
Ethical revelation: Problems with AI (bias, unreliability, environmental cost) become undeniable. But switching away is now prohibitively expensive.
Regulatory change: New rules restrict AI use. Organizations that can’t function without AI face compliance crises.
Capability degradation: The AI gets worse (through Drift, model changes, or intentional degradation). Dependent organizations suffer (a detection sketch follows the list).
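One way to notice capability degradation before it silently erodes a workflow is a pinned regression suite: freeze a set of inputs with known-good expectations and alert when the live model's pass rate drops. A minimal sketch, with made-up cases and an arbitrary threshold:

```python
# Hedged sketch of degradation detection (all cases and thresholds
# here are invented). Freeze prompts with known-good expected outputs
# and alert when the deployed model's pass rate falls.

REGRESSION_SUITE = [
    # (prompt, substring the answer must contain)
    ("What is 17 * 24?", "408"),
    ("Capital of Australia?", "Canberra"),
]

ALERT_THRESHOLD = 0.9  # arbitrary; tune to the workflow's tolerance

def ask_model(prompt: str) -> str:
    """Stand-in for a real API call to the deployed model."""
    raise NotImplementedError

def pass_rate(ask=ask_model) -> float:
    passed = sum(expected in ask(prompt)
                 for prompt, expected in REGRESSION_SUITE)
    return passed / len(REGRESSION_SUITE)

def check_for_degradation(ask=ask_model) -> None:
    rate = pass_rate(ask)
    if rate < ALERT_THRESHOLD:
        # In production: page someone, pin the previous model version,
        # or trip the non-AI fallback path.
        print(f"ALERT: pass rate {rate:.0%} below {ALERT_THRESHOLD:.0%}")
```

The suite doesn't prevent degradation; it converts a silent failure into a visible one, which is what a dependent organization needs in order to trigger its fallback plans.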
The Ethics of Building Dependence
The question isn’t just whether AGI is good or bad. It’s whether building dependence on AGI is wise, given:
- The infrastructure required to sustain it (Training vs Inference Footprint, Embodied Carbon)
- The concentration of power in providers (Brand as Proxy for Trust)
- The environmental costs (Geographic Inequality of Compute, The One More Query Problem)
- The uncertainty about long-term availability
Organizations making integration decisions now are betting that AGI infrastructure will remain available, affordable, ethical, and improving. This may be a good bet. It may not be.
Reversibility as a Value
One response: treat reversibility as a design constraint. Build AI-assisted workflows that could function (perhaps degraded) without AI. Maintain human expertise in parallel. Keep switching costs manageable.
This is costly and may seem unnecessary while AI works well. But it’s insurance against a future where AI dependence becomes a liability.
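What reversibility as a design constraint can look like in code, as a minimal sketch with entirely hypothetical names: the workflow depends on a small interface rather than a specific provider, and a deliberately simple non-AI path keeps the system running (degraded but functional) when the AI is unavailable or has been switched off.

```python
# Minimal sketch of a reversible AI-assisted workflow (all names are
# hypothetical). The caller depends on the Triage protocol, not on any
# particular provider, and a rules-based fallback keeps the workflow
# running without the AI.

from typing import Protocol

class Triage(Protocol):
    def priority(self, symptoms: str) -> int: ...  # 1 = urgent, 3 = routine

class AITriage:
    """Wraps a remote model; may fail, change, or be withdrawn."""
    def priority(self, symptoms: str) -> int:
        raise ConnectionError("provider unavailable")  # stand-in for a real API call

class RuleTriage:
    """Deliberately simple non-AI path, kept maintained in parallel."""
    KEYWORDS = ("chest pain", "bleeding", "unconscious")
    def priority(self, symptoms: str) -> int:
        return 1 if any(k in symptoms.lower() for k in self.KEYWORDS) else 2

def triage(symptoms: str, primary: Triage, fallback: Triage) -> int:
    """Prefer the AI path, but degrade gracefully instead of stopping."""
    try:
        return primary.priority(symptoms)
    except Exception:
        return fallback.priority(symptoms)

print(triage("sudden chest pain", AITriage(), RuleTriage()))  # -> 1
```

The design choice doing the work is the seam: because callers see only the Triage interface, swapping providers, or removing the AI entirely, is a change at the call site rather than a rewrite of the workflow.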
The Systemic Version
Dependency lock-in at the organizational level is concerning. Dependency lock-in at the societal level is more so.
If entire sectors — healthcare, education, research, government — become dependent on AGI infrastructure, then:
- Society inherits the vulnerabilities of that infrastructure
- Power concentrates in those who control the infrastructure
- Alternative paths atrophy
- Disruption affects everyone, not just individual organizations
Open Questions
- How much dependency is too much?
- Who bears the risk of dependency — organizations, providers, or society?
- What would “graceful degradation” look like for AI-dependent systems?
- For early adopters, is it already too late to preserve reversibility?
See Also
- Stranded Assets Risk — what happens to infrastructure investments if conditions change
- Security Debt — another form of accumulated vulnerability
- Slow Institutions Fast Technology — institutions can’t respond quickly to dependency problems
- The Irony of AI for Climate — environmental costs as one reason dependency might become problematic
- Equity Initiatives as Capture Vectors — how well-intentioned access policies create lock-in
- Coerced Adoption — when institutional lock-in becomes individual mandate