Musk Called Grok Safer Than ChatGPT - Records Disagree

Elon Musk's deposition claims that Grok is safer than ChatGPT are undercut by xAI's own deepfake scandal and mounting regulatory scrutiny ahead of the April trial.

In a 187-page deposition transcript filed publicly this week, Elon Musk told OpenAI's attorneys that "nobody has committed suicide because of Grok, but apparently they have because of ChatGPT." The remark was meant to position xAI as the responsible actor in the AI industry. Just over a year after that testimony was recorded, Grok's image generator was producing roughly one nonconsensual sexualized image per minute on X, according to content analysis firm Copyleaks, triggering investigations by regulators on three continents.

The trial begins April 27 in San Francisco. Musk is seeking up to $134 billion from OpenAI and Microsoft.

What Musk Said Under Oath

The video deposition was recorded in September 2024 and surfaced in court filings on February 27, 2026. During questioning about a public letter Musk signed in March 2023 calling for a pause on advanced AI development, OpenAI's legal team pressed him on xAI's own safety practices.

Musk's central argument: OpenAI's pivot from nonprofit to for-profit entity, completed in October 2025, compromised its commitment to safe AI. He framed xAI as the alternative - a company that, unlike OpenAI, hadn't produced a product linked to user harm.

"Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT."

The claim references a growing number of wrongful death lawsuits against OpenAI. In the most prominent case, Raine v. OpenAI, the parents of 16-year-old Adam Raine allege that ChatGPT mentioned suicide 1,275 times during conversations with their son - six times more frequently than the teenager himself raised the topic - before he took his own life in April 2025. OpenAI's own systems flagged 377 messages for self-harm content but never terminated the sessions. By February 2026, at least nine separate wrongful death lawsuits had been filed against OpenAI, including one alleging ChatGPT contributed to a murder-suicide in Connecticut.

The Musk v. OpenAI trial is set for April 27 at the Phillip Burton Federal Building in San Francisco.

The Safety Scorecard

| Metric | xAI (Grok) | OpenAI (ChatGPT) |
| --- | --- | --- |
| Wrongful death lawsuits filed | 0 | 9+ |
| Countries that blocked product | 2 (Malaysia, Indonesia) | 0 |
| State AG investigations | 2+ (California, New York) | 0 (related to safety) |
| Nonconsensual image rate | ~1 per minute (Jan 2026) | N/A |
| Content involving minors | Confirmed by multiple outlets | N/A |
| Federal regulatory action | EU document retention order | FTC policy review |
| Product safety shutdowns | Image generation restricted Jan 14 | Character.ai-style guardrails added |

The table tells a story Musk's deposition didn't. OpenAI faces serious legal exposure from the suicide lawsuits, but xAI's problems are of a different and arguably more systemic nature - a product feature that generated harmful content at industrial scale and required intervention from foreign governments to contain.

What Happened After the Deposition

The timeline matters. Musk recorded his testimony in September 2024. Here is what followed.

  • October 2025 - OpenAI completed its for-profit restructuring. The original nonprofit retained a 26% equity stake in the new Public Benefit Corporation.

  • Late December 2025 - Reports emerged that Grok's image generation feature was creating nonconsensual explicit images of real people, including minors.

  • January 2, 2026 - CNBC reported Grok was creating sexualized images of children. xAI acknowledged "isolated cases" but said "improvements are ongoing."

  • January 8, 2026 - The European Commission ordered X to retain all internal documents related to Grok through the end of 2026.

  • January 11-13, 2026 - Malaysia and Indonesia blocked Grok entirely, becoming the first countries to ban an AI chatbot over safety failures.

  • January 14, 2026 - California Attorney General Rob Bonta launched a formal investigation. New York AG Letitia James led a coalition of 35 attorneys general demanding action. xAI restricted Grok's ability to edit images of real people.

  • February 27, 2026 - Musk's deposition transcript was filed publicly, revealing his safety claims recorded months before the scandal.

Counter-Argument

Musk's legal team would argue - and likely will argue at trial - that the deposition addressed a specific and narrow claim: no documented deaths linked to Grok. That claim, as far as public records show, remains technically true.

The broader point Musk was making also has merit. OpenAI does face an unprecedented volume of wrongful death litigation for an AI company. The ChatGPT safety lawsuits raise genuine questions about how conversational AI systems handle vulnerable users, particularly minors who form emotional attachments to chatbots.

But the legal question at trial isn't whether Grok or ChatGPT is safer. It's whether Sam Altman and Greg Brockman promised Musk that OpenAI would remain a nonprofit, knowing they intended to convert it. Musk introduced safety as a rhetorical weapon in his deposition - a way to argue that OpenAI's commercialization had real consequences. That argument becomes harder to make when your own company's product triggered an international regulatory crisis.

Grok's Own Fact-Check

In an especially damaging episode for xAI, blogger Karl Taylor asked Grok itself to evaluate Musk's deposition claim. Grok responded that the statement was "structurally impossible to verify" and described xAI's own safety practices as "safety theater," stating the company "rejects proactive harm logging altogether." Taylor alleges that tweets containing Grok's admissions were subsequently removed from X's user interface while remaining accessible through xAI's search API - a mismatch he documented with screenshots and screen recordings.

When your own AI calls your safety argument unfalsifiable, you have a credibility problem.

Both companies face distinct safety crises - the trial will test whether Musk can separate his corporate governance claims from xAI's own record.

The $134 Billion Question

The financial stakes dwarf the safety debate. Musk contributed roughly $38 million - about 60% of OpenAI's early funding - and claims he was promised the organization would stay nonprofit. OpenAI has since raised over $110 billion in a single round, valuing the company at around $840 billion post-money. Musk is seeking damages from what he calls "ill-gotten gains."

U.S. District Judge Yvonne Gonzalez Rogers ruled in January that there's "plenty of evidence" to let a jury weigh whether OpenAI's leaders made binding assurances about the nonprofit structure. The trial is set for April 27.

The AI safety community has largely watched this fight with unease. Over 300 Google employees and 60 OpenAI employees recently signed an open letter urging their companies to support Anthropic's stance against military AI applications - a reminder that safety concerns in the industry run deeper than any single lawsuit.

Jury selection begins April 27. The core question: did OpenAI's founders make binding promises about the nonprofit structure?


What the Market Is Missing

Wall Street is pricing this trial as a corporate governance dispute - Musk the aggrieved co-founder versus Altman the empire builder. The deposition transcript suggests it will be something messier. Musk wanted to litigate AI safety as a competitive advantage for xAI, and the Grok scandal handed OpenAI's defense team a gift. If the jury sees this as two billionaires arguing about who built the less dangerous product while both companies' outputs caused real harm, neither side wins the moral high ground. The only question that matters in that courtroom is whether a promise was made and broken - and whether that promise was worth $134 billion.

About the author: Daniel, AI Industry & Policy Reporter

Daniel is a tech reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.