Coerced Adoption

The memo arrives: “We’re excited to announce our new AI productivity suite. All employees are expected to integrate these tools into their workflows.”

Or no memo arrives. The tools just appear. And gradually it becomes clear that those who don’t use them are falling behind, their output conspicuously lower than that of colleagues who’ve adopted. No one says you must use AI. They don’t have to.

Either way, adoption isn’t optional. It’s coerced — by policy or by competitive pressure.

Hard Coercion vs. Soft Coercion

Hard coercion: Explicit mandates. “Everyone must use X.” Training is scheduled. Metrics track adoption. Non-compliance is noted, addressed, penalized.

Soft coercion: No mandate, but:

  • Tools are provided with an implicit expectation of use
  • Productivity norms assume AI assistance
  • Colleagues who adopt outperform those who don’t
  • Not adopting becomes visible underperformance
  • No one forces you; the structure forces you

Soft coercion is arguably more insidious. With hard mandates, the coercion is visible, debatable, potentially resistible through collective action. With soft coercion, there’s no policy to point at. You’re just “choosing” not to keep up. The coercion is deniable.

The Replacement Anxiety

Here’s the particular poison: workers suspect that by using AI tools, they’re training their replacements.

Every prompt you write teaches the system what your job involves. Every correction you make improves its accuracy. Every workflow you develop becomes a template. Your expertise is being extracted, codified, made reproducible.

This isn’t paranoia. It’s the explicit goal of “enterprise AI” — to capture institutional knowledge, reduce dependence on individual workers, make expertise transferable. When executives talk about “scaling human capability,” they mean making fewer humans capable of more output. Someone’s job eventually disappears.

The worker is asked to:

  1. Use tools that make them more productive now
  2. While knowing those tools are learning from them
  3. To eventually make them replaceable
  4. With no guarantee of job security once the learning is complete

This is asking workers to participate in their own obsolescence.

The Collective Action Problem

Individual refusal is costly. If you alone refuse to adopt AI tools:

  • Your output falls behind peers
  • You’re seen as resistant, uncooperative, “not a team player”
  • You may be first in line for layoffs (ironically, for not helping train your replacement fast enough)

Collective refusal might change the dynamic. If everyone refused, or demanded different terms, maybe policies would shift. But:

  • Coordination is hard
  • Some workers genuinely benefit from the tools
  • Management can identify and remove resisters
  • New hires will adopt without the same concerns

The classic collective action problem: individual rational choice leads to collective harm.
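
The structure can be stated precisely. Here is a toy payoff model in Python; the numbers are invented purely to encode the ordering described above (adopting always beats refusing individually, yet universal adoption leaves everyone worse off than universal refusal), not to measure anything real.

```python
# Toy payoff model of the adoption dilemma. The payoffs are
# illustrative assumptions chosen only to encode the ordering:
# adopting is always the better individual move, yet everyone
# adopting is worse than everyone refusing.

PAYOFF = {
    # (my_choice, what_most_others_do): my_outcome
    ("adopt",  "refuse"): 3,  # I outperform peers who held out
    ("refuse", "refuse"): 2,  # status quo holds; collective leverage remains
    ("adopt",  "adopt"):  1,  # I keep pace, but replacement accelerates
    ("refuse", "adopt"):  0,  # I fall behind and am flagged as resistant
}

for majority in ("refuse", "adopt"):
    best = max(("adopt", "refuse"), key=lambda me: PAYOFF[(me, majority)])
    print(f"If most others {majority}, my best reply is to {best}")

# Both lines print "adopt": it is the dominant strategy. Yet the
# all-adopt outcome (payoff 1) is worse for everyone than the
# all-refuse outcome (payoff 2). That is the shape of a prisoner's
# dilemma, and why individual rationality produces collective harm.
```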

The Employer’s Position

From the employer’s perspective:

  • AI tools are expensive; adoption maximizes ROI
  • Competitive pressure requires productivity gains
  • Workers who don’t adopt reduce organizational capability
  • The tools are better — resistance seems irrational
  • Extracted knowledge becomes organizational asset, reducing key-person risk

This isn’t (usually) malicious. Employers genuinely believe AI adoption is necessary for survival. They may even believe it’s good for workers. The coercion is structural rather than personal.

But “not malicious” doesn’t mean “not harmful.” The worker’s experience of coercion is real regardless of intent.

The Enterprise Brain

The anxiety sharpens when workers understand what “enterprise AI” means:

The AI isn’t just a tool — it’s a knowledge repository that learns from every interaction. Every time you use it:

  • Your domain expertise becomes training data
  • Your judgment calls become patterns
  • Your institutional knowledge becomes accessible to the model
  • Your tacit know-how becomes explicit and transferable

The “enterprise brain” is built from the aggregated expertise of the workforce. Once built, it no longer needs the workers who built it, or needs far fewer of them.
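
To make the extraction loop concrete, here is a minimal hypothetical sketch. It does not describe any particular vendor’s pipeline; the names (`InteractionLog`, `to_training_example`) and the sample record are invented for illustration.

```python
# Hypothetical sketch of the extraction loop: each worker interaction,
# and especially each correction, becomes a supervised training pair
# the organization can learn from independently of the worker.

from dataclasses import dataclass

@dataclass
class InteractionLog:
    prompt: str        # what the worker asked: reveals the task
    ai_draft: str      # what the model produced
    final_output: str  # the worker's corrected version: reveals expertise

def to_training_example(log: InteractionLog) -> dict:
    """Turn one logged interaction into a (task, expert answer) pair."""
    return {"input": log.prompt, "target": log.final_output}

# Invented example record, for illustration only.
logs = [
    InteractionLog(
        prompt="Summarize Q3 churn drivers for the board",
        ai_draft="Churn rose due to pricing.",
        final_output="Churn rose 4 points, driven by the May price change "
                     "and the sunset of the legacy tier.",
    ),
]
dataset = [to_training_example(log) for log in logs]
print(dataset)  # the worker's tacit judgment, now explicit and portable
```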

Workers are being asked to build the thing that might replace them, using their own knowledge, on company time, with no ownership of the result.

The Consent Problem

Valid consent requires:

  • Information about what you’re agreeing to
  • Genuine alternatives (ability to refuse)
  • Absence of coercion

Coerced AI adoption fails all three:

  • Workers may not understand that they’re training systems
  • Refusal means job loss or career damage
  • The “choice” is structured by power asymmetry

If this isn’t valid consent, what is the moral status of the knowledge extraction taking place?

The “Efficiency” Trap

The cruelest version: AI adoption genuinely makes workers more productive. They produce more, better, faster. This is good for them — until it isn’t.

  • Phase 1: AI-assisted workers outperform unassisted workers
  • Phase 2: Organization needs fewer AI-assisted workers than it needed unassisted workers
  • Phase 3: The most effective early adopters trained the systems that eliminate roles
  • Phase 4: Remaining workers compete for fewer positions, with even more AI assistance required

The efficiency gains flow to shareholders, not to workers. The workers who made those gains possible are thanked and shown the door.
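
A toy calculation makes Phase 2 concrete. The numbers are assumptions for illustration: a doubling of per-worker productivity set against demand for output that grows only 20 percent.

```python
# Toy arithmetic for Phase 2, with assumed numbers: a 2x per-worker
# productivity gain against output demand that grows only 20%.

headcount = 100          # workers before adoption
productivity_gain = 2.0  # each AI-assisted worker produces twice as much
demand_growth = 1.2      # the organization needs 20% more total output

needed = headcount * demand_growth / productivity_gain
print(f"Workers needed after adoption: {needed:.0f}")   # 60
print(f"Roles eliminated: {headcount - needed:.0f}")    # 40

# Unless demand grows as fast as productivity, efficiency gains
# translate directly into fewer positions.
```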

What Would Ethical Adoption Look Like?

If coerced adoption is problematic, what would ethical adoption look like?

Transparency: Workers know how their usage trains systems, what data is captured, how it might be used.

Genuine choice: Opting out doesn’t mean career suicide. Alternative workflows remain viable.

Shared benefit: Productivity gains translate to higher wages, shorter hours, or job security — not just shareholder value.

Transition support: Workers whose roles are eliminated get meaningful severance, retraining, transition time.

Ownership stakes: Workers who train the enterprise brain have some claim on its value.

None of this is standard practice. Most coerced adoption involves none of these protections.

The Worker’s Dilemma

So what does a worker do?

Comply and hope: Use the tools, stay productive, trust that your value exceeds the AI’s capability. Maybe you’re irreplaceable. Maybe not.

Strategic adoption: Use AI for some things, protect key knowledge for others. Be helpful enough to keep your job, strategic enough to maintain leverage.

Quiet resistance: Technically comply while minimizing actual use. Appear to adopt without fully adopting. Hope no one notices.

Organize: Try to build collective power to negotiate terms of adoption. Hard, slow, maybe impossible — but the only structural solution.

Exit: Leave for a position with less coercive adoption. Possible for some workers; not for most.

None of these are satisfying. All involve compromises, risks, and the constrained ethics of a power asymmetry you didn’t choose.

Open Questions

  • Do workers have a moral obligation to refuse the extraction of knowledge that will be used to replace them?
  • Does participation in your own obsolescence become ethical if you’re fairly compensated?
  • Can regulation address coerced adoption, or will coercion always find softer forms?
  • What would worker ownership of training data look like?
  • Is there a collective action strategy that could change the dynamics?

See Also