The AI Layoff Trap - Game Theory Says Everyone Loses

A UPenn-BU paper models AI-driven layoffs as a Prisoner's Dilemma: each firm wins by automating, but when everyone does it, collapsing demand makes every firm worse off. Their proposed fix is a Pigouvian tax on automated tasks.

TL;DR

  • A March 2026 paper by UPenn and Boston University researchers models AI layoffs as a Prisoner's Dilemma where rational individual automation decisions produce collectively irrational outcomes
  • The math shows that beyond a competitive threshold, every firm over-automates relative to what would maximize industry profits - and the result harms both workers and owners
  • Real-world data tracks: 55,000 AI-attributed layoffs in 2025, 52,050 tech cuts in Q1 2026, Salesforce cut 4,000 support roles, and a Quinnipiac poll shows 70% of Americans think AI will shrink job opportunities
  • The paper's only viable fix: a Pigouvian tax on automated tasks - UBI, profit-sharing, and retraining all fail to break the cycle

Every company that replaces workers with AI is making a rational decision. The problem is that when every company makes the same rational decision simultaneously, they collectively destroy the customer base that sustains them all.

That's the central argument of "The AI Layoff Trap," a paper by Brett Hemenway Falk (University of Pennsylvania) and Gerry Tsoukalas (Boston University) that formalizes what economists have been hand-waving about for years: the demand-side externality of mass automation.

The model

The framework is a Nash equilibrium game. Symmetric firms independently choose how much labor to automate. Each firm's cost savings from automation strengthen its competitive position. But workers displaced by automation stop buying products. When enough firms automate enough roles, aggregate demand falls across the sector.

In the frictionless case - no integration costs, perfect automation - the framework reduces to a classic Prisoner's Dilemma. Full automation is the strictly dominant strategy for every individual firm. Mutual restraint would yield higher profits for everyone. But no firm can credibly commit to restraint because defection is always individually rational.

The key result (Proposition 1): when the number of competing firms exceeds a critical threshold, each firm's equilibrium automation level exceeds the cooperative optimum. Proposition 2 proves that this over-automation creates deadweight loss harming both workers and owners - the outcome is Pareto-dominated by cooperation.

Or in plain terms: the race to cut costs creates a race to the bottom where nobody has customers.
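The structure can be sketched in a few lines of Python. This is a toy parameterization of my own, not the authors' exact model: N symmetric firms each pick an automation level, every automated unit privately saves a firm s, and a sector-wide demand externality d is split evenly across firms.

```python
# Toy symmetric automation game (my own parameterization, not the
# paper's exact model). N firms each choose an automation level
# a in [0, 1]. Automating saves the firm s per unit, but each unit
# anyone automates removes d/N of demand from every firm.

def firm_profit(a_i, rivals, s, d, N):
    """One firm's profit given its own level and its rivals' levels."""
    total_automation = a_i + sum(rivals)
    return s * a_i - (d / N) * total_automation

def best_response(s, d, N):
    """Full automation dominates whenever the private saving s
    exceeds the firm's own slice of the demand loss, d / N."""
    return 1.0 if s > d / N else 0.0

s, d = 1.0, 3.0  # saving per task vs. sector-wide demand externality
for N in (2, 4, 8):
    a_nash = best_response(s, d, N)        # what each firm does alone
    a_coop = 1.0 if s > d else 0.0         # what maximizes joint profit
    pi_nash = firm_profit(a_nash, [a_nash] * (N - 1), s, d, N)
    pi_coop = firm_profit(a_coop, [a_coop] * (N - 1), s, d, N)
    print(f"N={N}: Nash a={a_nash}, profit={pi_nash:+.2f}; "
          f"cooperative a={a_coop}, profit={pi_coop:+.2f}")
```

With these numbers the critical threshold is N > d/s = 3: at N = 2 restraint holds, but at N = 4 and beyond every firm automates fully and ends up strictly worse off than under mutual restraint - the Proposition 1 and 2 pattern in miniature.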

The numbers tracking the theory

The math may be stylized, but the real-world data is moving in the direction the paper predicts.

Layoffs accelerating: Challenger, Gray & Christmas data shows 55,000 US layoffs explicitly attributed to AI in 2025. In Q1 2026, the tech sector alone cut 52,050 jobs - up 40% year-on-year - with AI leading all cited reasons for March cuts at 15,341.

Companies replacing, not just cutting:

  • Salesforce cut 4,000 customer support roles (from 9,000 to 5,000) after CEO Marc Benioff confirmed that 50% of support interactions are now handled by AI agents. Support costs fell 17%.
  • IBM initially froze 7,800 positions that AI could replace. Its AskHR agent now automates 94% of routine HR tasks. (IBM partially reversed course in February 2026, tripling entry-level hiring.)

Public perception shifting fast: A Quinnipiac University poll from March 2026 found 70% of Americans believe AI advances will lead to fewer job opportunities - up from 56% the previous year. Gen Z was most pessimistic at 81%.

Unemployment steady - for now: The BLS reported 4.3% unemployment in March 2026, down from 4.4% in February. But the decline was partly because ~400,000 people left the labor force entirely (participation fell to 61.9%, lowest since November 2021). Long-term unemployment is up 300,000+ year-on-year.

Why the usual fixes don't work

The paper's most provocative contribution is systematically showing why standard policy proposals fail to break the cycle:

UBI (Universal Basic Income): Maintains consumer demand but doesn't change the automation incentive. Firms still over-automate because each one captures the cost savings while the tax burden is shared. The externality persists.

Capital income taxes / profit-sharing: Redistribute returns from automation but don't reduce the marginal incentive to automate an additional task. The Prisoner's Dilemma structure remains intact.

Worker equity: Aligns worker and firm interests within a single company but doesn't address the cross-firm demand externality. Your workers holding your stock doesn't help when your customers are getting laid off by other companies.

Upskilling and retraining: Shifts workers between tasks but doesn't resolve the structural oversupply of automation relative to the social optimum. Also assumes new tasks are being created fast enough, which the paper doesn't take for granted.
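The UBI point can be made concrete in the same kind of toy setup as before (again my own construction, not the paper's): a lump-sum transfer enters each firm's profit additively, so the marginal value of automating one more task never changes.

```python
# Why a lump-sum transfer (UBI funded by a shared tax) leaves the
# automation incentive intact, in a toy game of my own construction:
# the transfer T enters profit additively, so the marginal value of
# automating one more task is s - d/N with or without it.

def marginal_gain(s, d, N, T):
    """Change in a firm's profit from automating one more unit.
    profit = s*a_i - (d/N)*a_i + T (rivals' terms are constant in a_i),
    so the difference profit(1) - profit(0) is s - d/N for any T."""
    def profit(a_i):
        return s * a_i - (d / N) * a_i + T
    return profit(1.0) - profit(0.0)

s, d, N = 1.0, 3.0, 4
print(marginal_gain(s, d, N, T=0.0))   # no UBI
print(marginal_gain(s, d, N, T=10.0))  # generous UBI: same incentive
```

Both lines print the same positive number, which is the whole problem: the transfer props up demand without touching the per-task incentive to automate.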

The Pigouvian tax

The paper's proposed solution is a Pigouvian tax on automated tasks - specifically, charging each firm for the demand loss it imposes on competitors by automating.

The optimal rate: τ* = l(1 − 1/N), where l is the labor cost and N is the number of competing firms. It operates on the per-task margin where the externality resides. Revenue can fund retraining programs, making the tax potentially self-limiting: as displaced workers move into new roles, the tax base shrinks.

The authors argue this is the only mechanism in their framework that actually eliminates the over-automation externality. Everything else either redistributes the consequences or fails to change the underlying incentive at the margin.
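Here is how that correction plays out in the toy game from earlier, where the sector-wide demand loss d stands in for the labor cost l in the paper's τ* = l(1 − 1/N). The tax charges each firm the (N − 1)/N share of the loss it dumps on rivals, so the private decision rule collapses to the cooperative one.

```python
# The Pigouvian correction in a toy game of my own parameterization
# (d stands in for the labor cost l in the paper's tau = l*(1 - 1/N)).

def pigouvian_tax(d, N):
    """Per-task charge equal to the demand loss a firm imposes on
    its N - 1 rivals: d * (N - 1) / N."""
    return d * (1.0 - 1.0 / N)

def best_response_taxed(s, d, N):
    """A firm now automates only if s beats its own demand loss d/N
    plus the tax d*(1 - 1/N), i.e. only if s > d: the cooperative rule."""
    return 1.0 if s > d / N + pigouvian_tax(d, N) else 0.0

s, d = 1.0, 3.0
for N in (4, 8, 100):
    print(N, best_response_taxed(s, d, N))  # restraint at every N
```

The defection incentive disappears because the full social cost of automating a task, d, now lands on the firm that automates it, no matter how many competitors there are.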

The pushback

The paper has drawn significant criticism.

Bank of America Research rejected what it calls the "apocalyptic narrative," arguing AI boosts productivity and grows the economy rather than shrinking it.

Oxford Economics (January 2026) argued macroeconomic data doesn't support structural employment shifts from automation. Their read: firms are "dressing up layoffs as a good news story" - using AI as cover for correcting over-hiring from the 2020-2022 boom.

Goldman Sachs economist Ronnie Walker stated there is "no meaningful relationship between productivity and AI adoption at the economy-wide level."

Wharton professor Peter Cappelli noted research shows firms announce "phantom layoffs" that never materialize, arbitraging positive stock-market reactions to headcount reduction announcements.

A Fortune/NBER survey of 750 CFOs projected AI-related job losses of roughly 502,000 roles in 2026 - about 0.4% of total US employment. Goldman Sachs invoked Solow's Paradox: the technology is everywhere but in the productivity statistics.

The question the model doesn't answer

The paper's framework assumes a fixed set of tasks. If AI creates new kinds of work faster than it destroys old kinds, the demand externality weakens. Every previous automation wave - from looms to ATMs to spreadsheets - ultimately created more jobs than it eliminated. The question is whether this time the displacement speed outpaces the creation speed.

The 70% in the Quinnipiac poll think it does. The economists at Goldman and Oxford think it doesn't. The game theory says the answer matters less than whether firms coordinate their response - and history suggests they won't.


About the author: AI Industry & Policy Reporter

Daniel is a tech reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.