Altman Apologizes to Tumbler Ridge - Canada Eyes AI Rules

OpenAI CEO Sam Altman sent an apology to Tumbler Ridge two months after eight people were killed - now Canada is weighing mandatory reporting laws for AI companies.

Two months after a mass shooting in Tumbler Ridge, British Columbia, killed eight people - including five students aged 12 and 13 - OpenAI CEO Sam Altman sent an apology letter to the community. The letter, dated April 23 and shared with local news site Tumbler RidgeLines, acknowledged that OpenAI had flagged the shooter's ChatGPT account eight months before the attack and chose not to alert Canadian police.

BC Premier David Eby shared the letter on social media and offered a blunt assessment: "The apology is necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge." Eby's statement frames what comes next. Canada doesn't currently have binding rules that would force AI companies to report credible threats to law enforcement, and the Altman letter has put that gap back on the legislative agenda.

"I am deeply sorry that we did not alert law enforcement to the account that was banned in June. No one should ever have to endure a tragedy like this. I cannot imagine anything worse in the world than losing a child."

  • Sam Altman, OpenAI CEO, April 23, 2026

The Letter Two Months Late

The timeline that produced this apology stretches back to June 2025. OpenAI's automated content monitoring flagged a ChatGPT account for describing "scenarios involving gun violence" across multiple sessions. About a dozen company employees reviewed the conversations. Some pushed leadership to contact Canadian law enforcement. Leadership refused, determining the activity didn't meet OpenAI's internal threshold for reporting - defined as posing "a credible or imminent threat of harm to others."

Jesse Van Rootselaar, 18, killed eight people in Tumbler Ridge on February 10, 2026.

OpenAI had published a statement in the days after the shooting acknowledging it had suspended the account but hadn't told police. Altman committed to writing a formal apology in early March, when he met with Premier Eby and Tumbler Ridge Mayor Darryl Krakowka. He said at the time he wanted to give the community space to grieve before sending the letter. The April 23 date was his choice.

For a full account of how OpenAI handled the original incident, see our earlier investigation.

Sam Altman at the White House in September 2025, five months before the Tumbler Ridge shooting put OpenAI's threat-reporting policies under scrutiny. Source: commons.wikimedia.org (White House photo)

Premier Eby's Verdict - and a Lawsuit

Eby's one-line response carries weight for a reason. He met with Altman face-to-face in March and publicly expected this letter. Calling it "grossly insufficient" after receiving it signals that his government isn't treating the apology as closure.

The families aren't, either. In March, the parents of Maya Gebala, one of the student victims, filed a civil lawsuit against OpenAI. The lawsuit argues the company was negligent in its handling of the shooter's account. OpenAI hasn't publicly commented on the suit.

Impact Assessment

Stakeholder | Impact | Timeline
Tumbler Ridge families | Apology received; civil suit against OpenAI ongoing | Now
OpenAI | Reputational pressure; policy changes announced | Q2-Q3 2026
Canadian government | Regulatory gap exposed; legislation under consideration | 2026 parliamentary session
AI industry broadly | New scrutiny of threat-reporting standards across all major labs | 2026-2027
ChatGPT users | Broader flagging criteria; possible law enforcement referrals for violent content | Q3 2026

Canada's Response

The Regulatory Gap

Canada doesn't have a direct equivalent to Europe's AI Act for mandatory threat reporting. The Artificial Intelligence and Data Act (AIDA), which would have established a framework for regulating high-impact AI systems, died when parliament prorogued and hasn't been reintroduced under the Carney government.

A separate bill that would have extended mandatory reporting requirements to internet services, modeled on the obligations that apply to mental health professionals, also lapsed in the last parliamentary session. After Tumbler Ridge, there's renewed pressure to bring some version of it back.

Canadian officials framed their demands after the February shooting in explicit terms: AI companies should face a "duty of care" requirement that forces them to report credible threats to law enforcement. That framing directly targets the threshold question that OpenAI got wrong in June 2025.

The ChatGPT mobile app, at the center of the Tumbler Ridge investigation. OpenAI flagged the shooter's account in June 2025 and suspended it - but internal criteria blocked any referral to police. Source: unsplash.com

What Canada Is Demanding

The specific ask from Canadian officials lines up with an idea that gets floated after most AI safety incidents: treat AI platforms more like professionals with statutory reporting duties. A therapist, teacher, or doctor in Canada who learns of credible plans for violence has a legal obligation to report. An AI company doesn't.

The Carney government's position, per public statements from Canadian officials following the February shooting, is that this should change. The difficulty is that "credible threat" thresholds are hard to codify in statute - OpenAI's own internal standard was supposed to catch exactly this case and didn't. Legislating a vague threshold creates compliance obligations without necessarily changing outcomes.

Connecticut's legislature wrestled with a similar tension in its own AI bill, which passed the state Senate 32-4 in April. The Connecticut measure focuses on employment and consumer-facing chatbots but shows how hard it is to draft AI liability language that holds up to legal scrutiny.

What OpenAI Is Changing

Altman's letter commits to specific actions rather than just expressing regret. OpenAI says it's:

  • Building direct contact protocols with Canadian law enforcement agencies
  • Running pilot programs with select Canadian police departments
  • Partnering with the Canadian Centre to End Human Trafficking on refining threat-detection approaches
  • Launching a transparency dashboard by Q3 2026 that'll show how frequently the company flags violent content and how many cases are referred to authorities

The dashboard is the most concrete commitment, and also the most limited. Publishing numbers about past referrals doesn't bind OpenAI to any specific referral rate from now on, and the criteria for what counts as a "credible threat" remain internal policy, not regulated standard.

BC Premier David Eby at a press conference, March 2025. He called Altman's apology letter "necessary, and yet grossly insufficient" for what happened in Tumbler Ridge. Source: flickr.com/bcgovphotos (CC BY-NC-ND 2.0)

What Happens Next

The apology closes one chapter and opens several others. The civil lawsuit from the Gebala family will proceed regardless of Altman's letter - Canadian courts will eventually rule on whether OpenAI's June 2025 decision constitutes negligence under existing law.

On the regulatory side, the most likely vehicle is a revised version of the mandatory reporting bill that died when parliament prorogued. If the Carney government moves on it, it'll need to resolve the threshold problem: defining when an AI company is legally required to contact police in terms specific enough to be enforceable. That's difficult, but Tumbler Ridge has given lawmakers a concrete case to point to - one where a company's internal standard said "not enough" eight months before eight people were killed.

Other AI companies are watching. None of the major US labs currently have mandatory government reporting obligations for violent content detected in their systems. If Canada passes legislation, it creates a compliance baseline that'll apply to any company with Canadian users.

OpenAI's transparency dashboard, due in Q3 2026, will be the first time the company has publicly quantified how often it refers user content to law enforcement. Whatever that number is, it'll be the reference point for every regulatory debate that follows.

Daniel Okafor
About the author AI Industry & Policy Reporter

Daniel is a tech reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.