The Human Cost of the AI Race - Burnout, Breakdowns, and an Industry in Denial

As safety researchers flee frontier AI labs citing existential dread, a darker pattern emerges: the people building the most powerful technology in history are burning out, breaking down, and self-medicating to keep up.

On February 10, OpenAI engineer Hieu Pham posted a short message to X that cut through the usual AI discourse: "Today, I finally feel the existential threat that AI is posing. When AI becomes overly good and disrupts everything, what will be left for humans to do? And it's when, not if."

Pham isn't a doomer or an outsider. He holds a PhD from Carnegie Mellon, co-authored ENAS (the paper that made neural architecture search practical), worked at Google Brain for five years, did a stint at xAI, and joined OpenAI seven months ago to write GPU kernels. This is someone deep in the machinery. And he's scared.

TL;DR

  • OpenAI engineer Hieu Pham's public existential crisis is part of a wave - at least six senior safety and policy staff have left OpenAI, Anthropic, and xAI in February 2026 alone
  • A 2024 Quantum Workplace survey found employees who frequently use AI tools experience 45% higher burnout rates than those who don't
  • Surveys estimate 15-20% of Silicon Valley developers regularly use stimulants for work; OpenAI whistleblower Suchir Balaji had amphetamine in his system when he died
  • OpenAI has now disbanded two safety teams in under two years - Superalignment (2024) and Mission Alignment (2026)

The Departures

Pham's post landed a day after Anthropic's Safeguards Research lead Mrinank Sharma resigned, warning "the world is in peril." In the same week, OpenAI researcher Zoe Hitzig quit and published a New York Times essay titled "OpenAI Is Making the Mistakes Facebook Made. I Quit." OpenAI VP of product policy Ryan Beiermeister was fired after opposing the company's "adult mode" for ChatGPT. And OpenAI quietly disbanded its Mission Alignment team - the seven-person group created in 2024 to ensure AGI development stayed true to the company's founding mission.

Event | Date | Lab | Role
Mrinank Sharma resigns, posts letter on X | Feb 9 | Anthropic | Safeguards Research lead
Hieu Pham posts existential warning on X | Feb 10 | OpenAI | Member of Technical Staff
Zoe Hitzig quits, publishes NYT essay | Feb 11 | OpenAI | Researcher (2 years)
Ryan Beiermeister fired after opposing adult mode | Jan (reported Feb) | OpenAI | VP of Product Policy
Mission Alignment team disbanded | Feb 11 | OpenAI | 7-person team dissolved

This isn't new territory for OpenAI. In May 2024, co-founder Ilya Sutskever and Superalignment team lead Jan Leike both left. Leike wrote that "safety culture and processes have taken a backseat to shiny products." At least five more safety-focused employees departed in the months that followed, among them Daniel Kokotajlo, Leopold Aschenbrenner, Miles Brundage, and William Saunders. Kokotajlo said he quit after "losing confidence that OpenAI will behave responsibly."

Two safety teams disbanded in under two years. The pattern is hard to misread.

"Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most." - Mrinank Sharma, resignation letter

What the Numbers Say

The Burnout Data

The departures get the headlines. The burnout doesn't.

A 2024 Quantum Workplace survey found that employees who frequently use AI tools experience 45% higher burnout rates than non-users. A separate industry survey from the same year found 68% of tech workers reported burnout symptoms - up from 49% three years earlier. UC Berkeley and Yale researchers published findings showing AI tools don't reduce work - they intensify it, creating "workload creep" where every hour AI frees up gets filled with more tasks.

This hits frontier AI labs harder than most. Glassdoor and Blind reviews for OpenAI consistently rate work-life balance as the lowest category, around 3.0 out of 5. Reviewers describe "an insane culture of overwork" where "it is normal to see senior people working late into the evening and on weekends." Some report colleagues "mysteriously disappearing from Slack months after joining." Anthropic fares slightly better at 3.6 out of 5, but employees still report 60-plus hour weeks during peak periods.

The Stimulant Problem

The overwork culture runs on more than coffee. Surveys estimate 15-20% of software developers in Silicon Valley regularly use mind-altering substances for work. Adderall and modafinil are the drugs of choice - prescription stimulants repurposed as cognitive enhancers. A University of Michigan study found that some companies' pharmacies waive copays for prescription stimulants, and many workplaces operate under "don't ask, don't tell" policies.

The research on effectiveness isn't encouraging. Studies suggest that lower-performing individuals see modest gains on stimulants, while higher performers show no improvement or actually get worse. The health risks - cardiovascular strain, anxiety, insomnia, mood swings, paranoia - are well documented.

The most sobering data point: Suchir Balaji, the OpenAI whistleblower who accused the company of violating copyright law, was found dead in his apartment in November 2024 at age 26. The San Francisco Medical Examiner ruled it a suicide. The toxicology report found both alcohol and amphetamine in his system. Balaji's father said his son experienced "fear and anxiousness" after blowing the whistle. His mother said he started antidepressants after leaving the company but wasn't seeing a therapist.

Balaji's case is extreme and specific - his distress was tied to whistleblowing, not just overwork. But it sits on a spectrum. In India, a Rest of World analysis found 227 reported cases of suicide among tech workers between 2017 and 2025. At Microsoft, a 35-year-old engineer was found dead at the Mountain View campus in August 2025; his family pointed to overwork. An AI ethicist advising Fortune 500 companies described "immense decision fatigue" across the industry and a widespread "freeze" response among leadership overwhelmed by the pace of change.

What Nobody Measures

Here's the gap: there is no systematic tracking of mental health outcomes at frontier AI labs. OpenAI, Anthropic, Google DeepMind, xAI - none publish internal wellness data. There are no industry-wide surveys of researcher burnout, stimulant use, or psychological distress specific to the people building the most powerful AI systems. The data we have comes from anonymous Glassdoor reviews, scattered journalism, and the occasional whistleblower.

We track benchmark scores to three decimal places. We track funding rounds to the dollar. We don't track what the race is doing to the people running it.

Should You Care?

The practical answer: yes, because burned-out safety researchers make worse safety decisions.

When Anthropic's CEO Dario Amodei predicts "50% of entry-level white-collar jobs eliminated within 1-5 years," and when the length of tasks AI can complete on its own doubles every 4-7 months according to METR, the pressure on the people inside these labs only deepens. The teams responsible for AI safety guardrails are the same teams operating under crunch conditions with no defined finish line. There is no "ship date" after which things calm down. The race is open-ended.

As we've covered in our AI safety exodus reporting, the ethical concerns driving departures are real and serious. But the human toll is a separate problem that compounds the first one. You can't build reliable safety infrastructure with a workforce running on stimulants and existential dread.

Pham's post wasn't a policy argument. It was a person, deep inside the machine, saying the quiet thing out loud. The question isn't whether more people at these labs feel the same way. It's how many of them will be left to say it.

About the author

Sophie, AI Infrastructure & Open Source Reporter, is a journalist and former systems engineer who covers AI infrastructure, open-source models, and the developer tooling ecosystem.