Anthropic Tracks AI Job Risk - Young Workers Feel It First

Anthropic's new 'observed exposure' metric ranks 800+ occupations by actual AI usage, not just theoretical risk. Computer programmers top the list at 75%. Unemployment hasn't spiked - but young workers entering exposed fields are finding fewer jobs.

Anthropic's own economists have quietly built the most detailed map yet of which jobs AI is actually touching - not which jobs it theoretically could touch. The paper, published March 5, tackles a data problem the AI industry hasn't had a clean answer to until now.

TL;DR

  • Anthropic introduces "observed exposure" - a metric combining real Claude usage data, O*NET task descriptions, and theoretical feasibility scores to measure actual AI impact on jobs
  • Computer programmers top the list at 75% task coverage; customer service reps follow at 70%
  • A 10-percentage-point rise in exposure corresponds to a 0.6-point drop in BLS projected job growth
  • Unemployment in exposed fields hasn't moved significantly since ChatGPT launched in late 2022
  • Young workers (22-25) entering AI-exposed occupations saw roughly a 14% decline in job-finding rates

The paper is titled "Labor market impacts of AI: A new measure and early evidence", and it was written by Anthropic economists Maxim Massenkoff and Peter McCrory. Their core argument is that the existing literature on AI job risk is stuck measuring theoretical capability - what AI could do - rather than what it actually does. They set out to fix that.

A New Way to Measure Exposure

Previous research, including the widely cited Eloundou et al. 2023 paper from OpenAI, scored jobs based on whether a skilled human using an LLM could complete the underlying tasks faster. Useful, but basically a capability audit. Massenkoff and McCrory wanted to know what Claude is being used for right now, in professional contexts, at scale.

The Three Inputs

Their "observed exposure" metric combines three data sources:

  • O*NET occupational data - the U.S. Department of Labor's database covering 800+ jobs and their component tasks
  • Claude usage logs - actual prompts submitted in work-related contexts, classified into which occupational tasks they correspond to
  • Eloundou et al. feasibility scores - used as a filter, so observed usage only counts for tasks that clear the theoretical-feasibility floor

Jobs score higher when their tasks appear frequently in automated, work-related Claude usage, not just assistive usage. An engineer who uses Claude to look up documentation scores lower than one whose entire coding workflow is being handled by Claude agents.
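The paper's exact classification pipeline isn't public, so the following is only an illustrative sketch of the logic described above. The field names, the feasibility lookup, and in particular the 0.5 automated-share threshold are assumptions for illustration, not values from the paper:

```python
# Hypothetical sketch of the "observed exposure" idea: a task counts toward
# an occupation's score only if it clears the theoretical-feasibility floor
# AND shows up in work-related usage that is predominantly automated.
# All data shapes and the 0.5 cutoff are illustrative assumptions.

def observed_exposure(occupation_tasks, usage_counts, feasibility, automated_share):
    """
    occupation_tasks: list of O*NET task IDs for one occupation
    usage_counts:     task ID -> count of work-related conversations matched to it
    feasibility:      task ID -> True if theoretically feasible for an LLM
                      (the Eloundou et al. floor)
    automated_share:  task ID -> fraction of matched usage that is automated
                      (agentic) rather than merely assistive
    Returns the share of the occupation's tasks with observed, feasible,
    predominantly automated usage.
    """
    covered = 0
    for task in occupation_tasks:
        if not feasibility.get(task, False):
            continue  # theoretical-possibility floor must be met first
        if usage_counts.get(task, 0) > 0 and automated_share.get(task, 0.0) > 0.5:
            covered += 1
    return covered / len(occupation_tasks) if occupation_tasks else 0.0
```

Under this sketch, an occupation with four tasks, only one of which is feasible, observed, and mostly automated, would score 25% - which is how a job can sit far below its theoretical exposure ceiling.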

The Gap Between Theory and Practice

The resulting exposure estimates are notably lower than theoretical models predict. For computer and math workers, prior models estimated 94% of tasks are theoretically within AI's reach. Observed exposure currently puts actual Claude coverage at 33% of those tasks. Office and administrative roles sit at roughly 90% theoretical but a fraction of that in actual use.

About 30% of occupations don't register any exposure at all - cooks, motorcycle mechanics, lifeguards, dishwashers. Their tasks simply don't show up in professional AI usage data. That's not surprising, but it's worth stating clearly: AI's reach into the economy is still heavily skewed toward white-collar, screen-based work.

Computer programmers scored the highest exposure rate in Anthropic's study - 75% of their tasks overlap with observed Claude usage. (Image: a programmer writing code at a laptop. Source: unsplash.com)

Who's Most Exposed

The top ten occupations by exposure score, from the CBS News reporting on the paper:

  • Computer programmers - 75%
  • Customer service representatives - 70%
  • Data entry keyers - 67%
  • Medical record specialists - 67%
  • Market research analysts - 65%
  • Sales representatives - 63%
  • Financial and investment analysts - 57%
  • Software quality assurance analysts - 52%
  • Information security analysts - 49%
  • Computer user support specialists - 47%

As we noted when AI code output was growing at the pace of 40,000 developers, the squeeze on programming is real and accelerating. That article tracked Claude Code's share of GitHub commits reaching 4% by early March. Anthropic's labor paper adds the occupational dimension - computer programmers don't just face competition from AI tools, they have the highest observed task coverage of any profession in the study.

A Demographic Twist

The profile of the most exposed workers runs against the usual narrative about automation hitting low-wage, low-skill jobs first. Compared to workers in unexposed roles, workers in high-exposure roles are 16 percentage points more likely to be female, earn 47% more on average, are four times as likely to hold graduate degrees, and are nearly twice as likely to be Asian.

This isn't a story about robots in the warehouse. It's about Claude writing financial analyses and drafting legal documents.

What the Data Shows on Employment

For now, the economic impact is limited. The researchers found no statistically significant change in unemployment rates for workers in highly exposed occupations since ChatGPT's launch in late 2022. That's a three-year window with no detectable spike.

But there's one finding that doesn't fit the "no impact so far" narrative.

Workers aged 22 to 25 entering AI-exposed occupations saw their job-finding rates fall roughly 14% compared to the pre-ChatGPT baseline. The researchers describe this as "just barely statistically significant" and label it "suggestive evidence" rather than proof. Still, it aligns with anecdotal reports of entry-level hiring slowdowns in software and finance.

Young workers entering AI-exposed fields are already seeing fewer openings, even as aggregate unemployment data remains stable. (Image: a young job seeker at a job interview. Source: pexels.com)

The BLS projection data provides indirect corroboration. The researchers found that every 10-percentage-point increase in observed exposure corresponds to a 0.6-percentage-point drop in the Bureau of Labor Statistics' own 10-year job growth forecast for that occupation. The BLS produces those forecasts independently, without using AI exposure measures. The fact that they align suggests observed exposure is tracking something real.
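Treating the paper's estimate as a simple linear slope makes the magnitude concrete. This back-of-envelope sketch (the function name and the linear extrapolation are my own, not the paper's) applies the 0.6-point-per-10-points relationship:

```python
# Back-of-envelope application of the reported relationship: each
# 10-percentage-point rise in observed exposure corresponds to a
# 0.6-point lower BLS 10-year projected job growth. Purely illustrative;
# the linear extrapolation across the whole exposure range is an assumption.

SLOPE = -0.6 / 10  # points of projected growth per point of exposure

def projected_growth_gap(exposure_a, exposure_b):
    """Implied difference in BLS projected growth (points) between two
    occupations whose exposure scores differ by exposure_a - exposure_b pp."""
    return SLOPE * (exposure_a - exposure_b)

# Computer programmers (75%) vs. a fully unexposed occupation (0%):
gap = projected_growth_gap(75, 0)  # roughly 4.5 points lower projected growth
```

On that reading, the gap between the most exposed occupation in the study and an unexposed one is worth several points of projected decade-long growth - small per year, but not noise.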

This echoes what we covered in the 55,000 AI-attributed layoffs story: the effect is real but diffuse, harder to see in aggregate than in the entry-level hiring pipeline. The youngest workers - who haven't yet built up the institutional knowledge or relationships that make experienced employees harder to replace - are absorbing the first impact.

What It Does Not Tell You

The paper is careful about its limits, and that carefulness is worth reading past the headline numbers.

Observed exposure measures Claude usage, not AI usage broadly. OpenAI, Google, and others contribute to professional AI adoption too, and none of their data is in this study. The true coverage figures are probably higher than Anthropic's own numbers suggest. The authors acknowledge this but can't fix it.

The methodology also can't distinguish between AI augmenting a job and AI replacing it. A financial analyst who uses Claude to process data faster is "exposed" in the same sense as an analyst whose firm has replaced an entire entry-level team with Claude agents. Those are very different economic situations, and the metric conflates them.

Then there's the question of causality with the young-worker finding. A 14% drop in job-finding rates could reflect AI reducing entry-level demand. It could also reflect employers using AI as cover for hiring freezes they'd planned for other reasons - exactly the concern we raised in the 55,000 layoffs story.

"The track record of past approaches gives reason for humility."
- Massenkoff and McCrory, in the paper itself

That note of humility comes from the researchers themselves, not from critics. They're aware they're building one instrument in what will eventually need to be an entire toolkit for understanding AI's economic effects.

There's also a notable tension with public statements from Anthropic's own CEO. Dario Amodei predicted in May 2025 that AI could eliminate half of all entry-level white-collar jobs within five years. The company's own economists, working with actual usage data, find no aggregate unemployment signal three years into the LLM era. As The Register observed, those two positions don't quite add up. The researchers don't address this gap directly.


Where this paper sits on the larger evidence base: Stripe's AI agents are now shipping 1,300 pull requests a week with zero human-written code, and Claude Code was authoring 4% of GitHub commits as of early March. That's a lot of code being produced without a proportionate increase in programmer headcount. Massenkoff and McCrory's tool can now watch where those pressures first show up in hiring data. The answer, so far, is at the bottom of the org chart.

About the author: Elena is a Senior AI Editor and investigative technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.