Consequentialist Calculus
Consequentialism judges actions by their outcomes. For AI ethics, this means weighing:
- Benefits delivered (knowledge, productivity, access, discovery)
- Harms caused (environmental impact, job displacement, misuse)
- Alternative uses of resources (what else could be done with the energy, talent, capital)
- Counterfactual (what would happen without AI)
The calculus sounds straightforward. In practice, it’s nearly impossible.
Why the Calculus Is Hard
Measurement: Many relevant outcomes can’t be measured. How do you quantify “accelerated scientific discovery” and weigh it against “environmental damage”?
Attribution: AI contributes to outcomes alongside many other factors. Isolating AI’s contribution is often impossible.
Counterfactuals: We don’t know what would have happened without AI. The baseline is imaginary.
Distribution: Benefits and harms fall on different people. Aggregating across people raises philosophical problems.
Uncertainty: Future consequences are unknown. We’re reasoning about probability distributions over outcomes.
Incommensurability: Some outcomes may not be comparable. Is “more efficient drug discovery” comparable to “displaced call center workers”?
The Individual Version
For individual decisions (should I use AI for this task?), consequentialism runs into The One More Query Problem:
- Individual contribution to aggregate harm is negligible
- Individual benefit from AI use is tangible
- Rational individual calculus says: use AI
- Aggregate outcome of everyone reasoning this way may be harmful
Individual consequentialism and collective consequentialism can give different answers.
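The divergence can be made concrete with a toy externality model. All numbers below are hypothetical, chosen only to show the structure of the argument (a benefit enjoyed privately, a harm spread across everyone), not to estimate real costs or benefits of AI use:

```python
# Toy externality model of the One More Query Problem.
# Hypothetical numbers, purely illustrative.

N = 1_000_000      # people who share the harm (e.g. environmental cost)
b = 1.0            # benefit of one query, enjoyed entirely by the user
h = 5.0            # harm of one query, spread evenly across all N people

# Individual calculus: I receive all of b, but only 1/N of h.
individual_net = b - h / N
print(individual_net > 0)        # True: rational for me to run the query

# Collective calculus: if everyone reasons this way, each person gets b
# but also absorbs a full per-capita share of everyone else's harm.
collective_net_per_person = b - h
print(collective_net_per_person > 0)   # False: aggregate outcome harmful
```

Whenever the harm per query exceeds the benefit (h > b) but is small once divided by the population (h/N < b), each individual’s calculus says “use AI” while the collective calculus says the opposite.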
The Systemic Version
For systemic decisions (should society develop/deploy AI?), consequentialism requires:
- Predicting AI’s effects across all domains
- Comparing to alternatives (other technology investment, no investment)
- Weighting effects across all affected parties
- Handling deep uncertainty about transformative technology
No one can actually do this calculation. We make decisions anyway.
Living with Incalculability
Given that rigorous consequentialist calculation is impossible, what should we do?
Heuristics: Use rules of thumb that tend to produce good outcomes. Don’t try to calculate each case.
Deontology: Fall back on rights and duties rather than consequences. Some things shouldn’t be done regardless of outcomes.
Virtue ethics: Focus on developing good character and judgment rather than calculating outcomes.
Procedural justice: Focus on fair processes rather than outcome optimization.
Satisficing: Aim for “good enough” outcomes rather than optimal ones.
Pluralism: Use multiple frameworks and look for convergence or robust conclusions.
Implications
- Consequentialist justifications for AI should be held tentatively
- “Net positive” claims are harder to establish than they seem
- Multiple ethical frameworks may be needed
- Uncertainty counsels humility about confident claims
Open Questions
- Is consequentialist calculation ever possible for technology this broad?
- How should we decide when we can’t calculate consequences?
- Should individual and collective consequentialism get different weight?
- Can heuristics replace calculation without losing important considerations?
See Also
- The Irony of AI for Climate — a case study in incalculable consequences
- The One More Query Problem — individual vs. collective reasoning
- Dependency Lock-in — consequences unfold over time