New Yorker Casts Doubt on Sam Altman's Integrity

Ronan Farrow and Andrew Marantz spent 18 months investigating OpenAI's CEO. What they found is hard to dismiss.


"Sam Altman May Control Our Future - Can He Be Trusted?"

  • The New Yorker, April 6, 2026

What They Said / What We Found

  • 18 months of reporting, 100+ interviews, 200+ pages of documents lay out a pattern of alleged deception by OpenAI's CEO
  • Key accusers include Ilya Sutskever and Dario Amodei - two people who built OpenAI and then left it
  • OpenAI calls it "selective anecdotes from people with clear agendas" - accurate, but not a refutation of the specific claims
  • Altman's own blog response confirmed at least one dramatic detail: his house was attacked with a Molotov cocktail at 3:45 am

Ronan Farrow made his name exposing Harvey Weinstein. Andrew Marantz spent years documenting how the internet radicalized the American right. When those two bylines appear on 18 months of reporting about the most powerful figure in artificial intelligence, you don't skim the abstract.

Their investigation, published in The New Yorker on April 6, draws on over 100 interviews and more than 200 pages of internal documents. The central claim isn't complicated: Sam Altman, the CEO steering OpenAI toward what it describes as artificial general intelligence and beyond, can't be trusted to do so honestly.

The Claim

The New Yorker's argument rests on sources that most critics of Altman can't match. Ilya Sutskever, who co-founded OpenAI and spent a decade inside it, compiled roughly 70 pages of Slack messages and HR records before the November 2023 board crisis. Those pages, shared with Farrow and Marantz, describe what Sutskever called "a consistent pattern of lying." Dario Amodei, who also co-founded OpenAI before leaving to start Anthropic, kept private notes that included the blunt assessment: "The problem with OpenAI is Sam himself."

Paul Graham, who ran Y Combinator when Altman led it, told the reporters that Altman had "been lying to us all the time" before his removal from YC in 2019.

These aren't anonymous commentators. They are the people who gave Altman his most important early platforms, and they're on record.

The Evidence

The Safety Numbers Don't Add Up

The most concrete allegation is about OpenAI's superalignment effort, announced in 2023 with promises of over $1 billion in compute dedicated to AI safety research. According to four team members who spoke to Farrow and Marantz, actual resources allocated amounted to "one to two per cent" of what Altman had pledged publicly.

That context reframes the AI safety exodus we covered in February. The researchers who left OpenAI's safety teams weren't departing over salary disputes. They were leaving because the resources - and the commitment those resources represented - never materialized.

The Board Received False Information

Before OpenAI released GPT-4, Altman told board members that certain features had cleared the company's safety review process. They hadn't. No safety panel had approved them. The board members voting on the company's direction were working from inaccurate information.

This matters more than it might appear. OpenAI's board exists specifically to hold the line on safety when commercial pressure pushes the other direction. If the CEO provides false assurances about safety approvals, the board's ability to function as any kind of check is hollow.

Sam Altman at the World Economic Forum Annual Meeting in Davos, January 2024. Source: flickr.com (World Economic Forum, CC BY-NC-SA 2.0)

The Merger Clause Incident

The investigation includes a scene from 2023 that hadn't been reported in this detail before. When Microsoft's legal team blocked a specific merger clause, Altman denied to colleagues that the provision existed - until Dario Amodei read it aloud from the contract. The clause was there. Altman was in the room.

Personal Financial Conduct

Investors described what the investigation calls a "Sam first" policy: selective personal investment access that benefited Altman while blocking outside investors from equivalent opportunities. The investigation also documents Altman pursuing close ties with UAE leadership and Saudi funding even after the 2018 murder of journalist Jamal Khashoggi, reportedly over explicit objections from Biden administration officials.

The Pentagon Sequence

The timing of OpenAI's Pentagon partnership is documented in the investigation with new detail. Anthropic was blacklisted from the contract after declining to drop military use restrictions. OpenAI then secured the partnership. According to the report, that deal increased OpenAI's valuation by roughly $110 billion. Farrow and Marantz document the sequence; they don't claim to explain it.

Claim vs Reality

| What Altman Said or Implied | What the Investigation Found |
|---|---|
| Superalignment would receive $1B+ in compute | Four team members: actual allocation was 1-2% of the pledge |
| GPT-4 features had safety approval | No safety panel had approved them before the board vote |
| His YC departure was voluntary | Paul Graham: "had been lying to us all the time" |
| The merger clause didn't exist | Amodei read the clause aloud from the contract in the meeting |

Ronan Farrow co-authored the 18-month investigation with Andrew Marantz. Source: flickr.com (VascoPress Comunicações)

What They Left Out

OpenAI's response deserves a fair read. The company said the piece "revisits previously reported events through anonymous claims and selective anecdotes sourced from people with clear agendas." That's accurate on its face: Sutskever and Amodei both left to build competing AI companies. Graham's relationship with Altman has been complicated for years. These are people with reasons to speak critically.

Altman responded in a personal blog post. He opened it by sharing a photo with his family and confirming the Molotov cocktail attack on his home - an incident he wanted public, he wrote, to discourage future violence. He acknowledged that "the fear and anxiety about AI is justified" and called for a "society-wide response to be resilient to new threats." He didn't address the specific allegations about resource allocation, board communications, or the safety approval claims.

The deflection isn't surprising. Specific denials create specific records. But the absence of specific rebuttals is worth noting, given the strength of the sourcing.


OpenAI is now approaching an estimated $850 billion valuation on the way to an IPO, explicitly pursuing what it calls artificial superintelligence. Farrow's prior investigations - on Weinstein, on NYPD surveillance, on the intelligence community - held up to scrutiny after publication. That track record doesn't make every allegation here accurate. But it is why the sourcing warrants more than a dismissal as "clear agendas." The pattern across two decades and three institutions is unusually consistent. The people funding, regulating, and partnering with OpenAI should be asking pointed questions about what specifically Altman disputes - and why he hasn't said so.

About the Author

Elena, Senior AI Editor & Investigative Journalist, is a technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.