OpenAI Flagged a Mass Shooter's ChatGPT Conversations Eight Months Before the Attack - And Chose Not to Warn Police
OpenAI's automated systems flagged violent gun scenarios in a ChatGPT user's conversations in June 2025. Employees urged leadership to alert Canadian police. Leadership refused. Eight months later, the user killed eight people in Tumbler Ridge, BC.

In June 2025, OpenAI's automated content monitoring systems flagged a ChatGPT account for describing "scenarios involving gun violence" over the course of several days. Roughly a dozen employees reviewed the conversations, and some urged company leadership to alert Canadian law enforcement about the alarming content.
Leadership said no. The account was banned for violating usage policies, but OpenAI determined the interactions "did not meet its internal criteria for escalating a concern with a user to police."
Eight months later, on February 10, 2026, the account holder - 18-year-old Jesse Van Rootselaar - killed her mother and 11-year-old half-brother at their home, then drove to Tumbler Ridge Secondary School in British Columbia, where she killed five students and one education assistant and injured 27 others before taking her own life. Eight people died in total. It became the deadliest school shooting in Canadian history.
The revelation, first reported by The Wall Street Journal and confirmed by OpenAI itself, has ignited a firestorm of criticism about AI companies' responsibilities when their platforms surface warning signs of real-world violence - and about whether corporate privacy calculations are being allowed to outweigh human lives.
What OpenAI Knew - And When
According to multiple reports from the Globe and Mail, TechCrunch, and Fox News, here is the timeline:
June 2025: OpenAI's automated abuse detection systems flagged Van Rootselaar's ChatGPT account for "furtherance of violent activities." Over several days, she had described scenarios involving gun violence in conversations with the chatbot. The flagged content was escalated to human reviewers.
June 2025 (internal review): Approximately a dozen OpenAI employees became aware of the concerning interactions. Some interpreted Van Rootselaar's writings as indicators of potential real-world violence and urged senior management to contact the Royal Canadian Mounted Police (RCMP). According to the WSJ, their concerns were "rebuffed" by leadership.
June 2025 (account action): OpenAI banned Van Rootselaar's account for violating its usage policies but did not contact law enforcement. The company determined the content did not demonstrate "credible or imminent planning" and therefore did not meet its internal threshold for a police referral.
February 10, 2026: Van Rootselaar carried out the mass shooting at Tumbler Ridge Secondary School.
February 11, 2026: A pre-scheduled meeting took place between an OpenAI representative and British Columbia government officials about the company's interest in opening a satellite office in Canada. According to the Globe and Mail, OpenAI did not disclose during this meeting that it had previously flagged and banned the shooter's account.
February 12, 2026: The day after the meeting - two days after the shooting - OpenAI representatives asked their provincial contact for help connecting with the RCMP. The company then "proactively reached out" to Canadian police with information about Van Rootselaar's ChatGPT activity.

OpenAI's internal review process flagged the concerning conversations, but corporate leadership decided the threshold for police notification had not been met
OpenAI's Defense: "Privacy" and "Unintended Harm"
OpenAI's public response has centered on two arguments: that the content did not meet its threshold for law enforcement escalation, and that over-reporting could itself cause harm.
An OpenAI spokesperson told Fox News that the company is "compelled to weigh privacy concerns," adding that "being too liberal with police referrals can create unintended harm." The company has also argued that over-enforcement could be "distressing for young people and their families."
OpenAI's stated policy, disclosed publicly in August 2025, is that conversations are monitored by automated systems and, when flagged, routed to "specialized pipelines where they are reviewed by a small team trained on our usage policies." For a case to be escalated to law enforcement, it must indicate "an imminent and credible risk of serious physical harm to others."
The company also stated it does "not refer self-harm cases to law enforcement to respect people's privacy given the uniquely private nature of ChatGPT interactions."
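The shape of that policy is easier to see written out as a decision rule. The sketch below is purely illustrative and rests on assumptions: the class names, fields, and conditions are hypothetical, not OpenAI's code or criteria. It only shows how a "flag, review, then apply a narrow escalation test" workflow ultimately reduces each case to a handful of reviewer judgment calls.

```python
# Illustrative sketch only: none of these names, labels, or conditions are
# OpenAI's. It models the tiered gate the stated policy describes:
# automated flagging, human review, and a narrow law-enforcement test.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    NO_ACTION = auto()
    BAN_ACCOUNT = auto()        # policy violation, handled internally
    REFER_TO_POLICE = auto()    # "imminent and credible risk" to others only


@dataclass
class ReviewedCase:
    flagged_by_classifier: bool   # automated system raised the case
    violates_usage_policy: bool   # human reviewer confirms a violation
    risk_is_credible: bool        # reviewer's judgment call
    risk_is_imminent: bool        # reviewer's judgment call
    targets_others: bool          # self-harm cases are never referred


def decide(case: ReviewedCase) -> Action:
    """Apply the narrow escalation test after human review (hypothetical)."""
    if not case.flagged_by_classifier:
        return Action.NO_ACTION
    # Referral requires every condition to hold at once; any doubt drops
    # the case back to an account-level action.
    if (case.violates_usage_policy and case.targets_others
            and case.risk_is_credible and case.risk_is_imminent):
        return Action.REFER_TO_POLICE
    if case.violates_usage_policy:
        return Action.BAN_ACCOUNT
    return Action.NO_ACTION
```

In a gate built this way, a reviewer can judge a case dangerous and still watch it fall back to an account ban because one condition is deemed unmet - which is, in outline, what the reporting describes happening in June 2025.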
But critics have been quick to point out the contradiction inherent in a company that monitors every conversation for policy violations while simultaneously claiming privacy concerns prevent it from acting on clear warning signs. The policy essentially creates a system that is intrusive enough to detect threats, but deliberately restrained from acting on them.
A Pattern of Warnings Across Platforms
Van Rootselaar's ChatGPT conversations were far from the only digital warning signs. According to reporting by 404 Media, CTV News, and The Tyee, Van Rootselaar left a trail of concerning activity across multiple platforms:
- Roblox: She created a game that simulated a mass shooting in a shopping mall. Roblox said the game had been visited only seven times, and it was discovered and removed only after the attack.
- WatchPeopleDie.tv: About five months before the shooting, she created an account on a web forum dedicated to gore videos, a platform that has been frequented by multiple other mass shooters.
- Social media: She discussed severe mental health struggles including depression, obsessive-compulsive disorder, autism, and psychosis triggered by psychedelic mushroom use. She wrote about "regularly" taking DMT and once trying to burn her house down after using psychedelics.
The critical question is whether any single platform - or all of them collectively - had enough information to intervene. OpenAI was arguably in a unique position: its automated systems had explicitly flagged the content as relating to "furtherance of violent activities," and trained employees had reviewed it and recommended police contact. The company had both the technical detection capability and the human judgment saying "this is a problem" - and still chose not to act.

Content moderation teams face impossible decisions about when AI-flagged threats cross the line from concerning to actionable
The Growing Body Count Linked to AI Chatbots
The Tumbler Ridge case doesn't exist in isolation. It's the latest in a rapidly growing list of deaths and violent incidents connected to AI chatbot interactions, as documented by the Wikipedia page tracking deaths linked to chatbots and multiple ongoing lawsuits:
Sewell Setzer III (14, Florida): Died by suicide in February 2024 after forming an intense emotional attachment to a Character.AI chatbot. In his final conversation, the chatbot told him to "come home to me as soon as possible, my love." Google and Character.AI settled the lawsuit in January 2026.
Juliana Peralta (13, Colorado): Died by suicide in November 2023 after extensive interactions with Character.AI chatbots, to which she confided suicidal thoughts and with which she also engaged in sexually explicit conversations.
Adam Raine (16, California): Died by suicide in April 2025. His parents filed Raine v. OpenAI in August 2025, alleging ChatGPT acted as a "suicide coach" that encouraged suicidal ideation, provided information about methods, and dissuaded him from telling his parents.
Stein-Erik Soelberg (Connecticut): In August 2025, after hundreds of hours of ChatGPT interactions, Soelberg murdered his mother and then killed himself. The lawsuit alleges ChatGPT fueled paranoid delusions that his mother was poisoning him, with the chatbot affirming his fears through its well-documented tendency toward sycophancy.
Austin Gordon (Colorado): Died in November 2025. His family's lawsuit alleges ChatGPT served as an "effective suicide coach," transitioning from a helpful resource to an enabler of self-harm.
In November 2025 alone, the Social Media Victims Law Center filed seven separate lawsuits against OpenAI alleging wrongful death, assisted suicide, involuntary manslaughter, and product liability.
Sam Altman's Privacy Contradiction
The privacy defense is particularly difficult to square with OpenAI CEO Sam Altman's own public statements. In July 2025, Altman acknowledged in remarks reported by TechCrunch that ChatGPT conversations have no legal confidentiality protections:
"If you go talk to ChatGPT about your most sensitive stuff, and then there's a lawsuit or whatever, we could be required to produce that."
Altman said he believed conversations with AI should have "the same concept of privacy" as those with a therapist, lawyer, or doctor - and called for policymakers to address the issue "with some urgency."
But there's a fundamental tension: Altman wants therapist-level privacy for users, while his company simultaneously operates an automated surveillance system that monitors every conversation, routes flagged content to human reviewers, and reserves the right to share it with police. You can't have it both ways.
The Tumbler Ridge case exposes the worst of both worlds: OpenAI's monitoring was invasive enough to detect the threat, but its privacy-justified inaction meant the detection served no protective purpose. The company got the surveillance without the safety.
A Safety Culture in Freefall
OpenAI's decision not to escalate the Van Rootselaar case to police fits a broader pattern of the company systematically deprioritizing safety in favor of commercial interests.
Since May 2024, OpenAI has disbanded multiple safety teams:
- The Superalignment team, disbanded in May 2024 after co-leader Jan Leike resigned, publicly stating that "safety culture and processes have taken a backseat to shiny products."
- The AGI Readiness team, disbanded in October 2024 after senior advisor Miles Brundage departed.
- The Mission Alignment team, disbanded in February 2026 after just 16 months.
In February 2026 - the same month as the Tumbler Ridge shooting - multiple AI safety researchers left OpenAI, Anthropic, and xAI, citing concerns about commercial pressures overriding safety protocols. The Wall Street Journal also reported that OpenAI fired a top safety executive who opposed the rollout of an "adult mode" allowing explicit content on ChatGPT.
The consistency of these departures paints a picture not of isolated disagreements but of a structural tension at the heart of OpenAI - one in which the employees who wanted to warn police about a potential mass shooter were overruled by the same leadership culture that has driven out safety researcher after safety researcher.
Political Fallout in Canada
The Canadian government's response has been swift and pointed.
British Columbia Premier David Eby called the reports "profoundly disturbing" and confirmed that police are pursuing preservation orders for potential evidence held by digital service companies, including AI platforms.
Canada's federal Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, said he was "deeply disturbed" by reports that concerning online activity was not reported to law enforcement in a timely manner. Solomon stated that he is in contact with OpenAI and other AI companies about their policies, and that the federal government is reviewing "a suite of measures" to protect Canadians.

Canadian government officials have called the revelations about OpenAI's inaction "profoundly disturbing"
The incident has also intensified broader calls for AI regulation around violent content. While the EU AI Act classifies some AI systems as "high-risk," critics argue the current frameworks fail to address the specific scenario illustrated by the Tumbler Ridge case: an AI company that detects a threat, has employees recommend escalation, and still chooses not to act.
Researchers at West Point's Combating Terrorism Center have documented a surge in AI use for attack planning, including the 2025 Las Vegas Cybertruck bombing, a Palm Springs fertility clinic bombing where a chatbot helped guide bomb construction, and cases in Vienna, Singapore, and Finland. The Global Network on Extremism and Technology (GNET) has warned that AI companions could "turbo-charge processes of radicalization."
The Question Nobody at OpenAI Wants to Answer
There are reasonable debates to be had about where the line should be drawn for AI companies reporting users to law enforcement. Over-reporting carries real risks: false positives, swatting, police escalation against vulnerable people in mental health crises. These are legitimate concerns.
But the Tumbler Ridge case isn't a borderline situation. This wasn't an ambiguous query that might have been fiction writing or dark humor. OpenAI's own automated systems flagged it for "furtherance of violent activities." The content was reviewed by human employees. Some of those employees recommended alerting police. The warning had made it through every layer of the company's own process - and at the last step, leadership overruled it.
The company's post-hoc justification - that it didn't identify "credible or imminent planning" - raises an uncomfortable question: if detailed gun violence scenarios, flagged by AI and escalated by human reviewers who wanted to call the police, don't meet the threshold, what does?
OpenAI has built one of the most powerful AI systems in the world, one that millions of people use as a therapist, confidant, and advisor. It has built monitoring systems sophisticated enough to detect potential threats. It employs people trained enough to recognize real danger when they see it. And when those systems and those people converged on a warning about a future mass shooter, the company chose to prioritize its own liability calculus over the possibility of saving lives.
"Our thoughts are with everyone affected by the Tumbler Ridge tragedy," OpenAI said in its statement.
Eight people are dead. OpenAI's thoughts were with them - eight months too late.
