Oregon Passes First 2026 Chatbot Safety Bill for Minors

Oregon's SB 1546 requires chatbot operators to implement suicide safeguards, disclose AI nature to minors, and ban engagement-maximizing rewards for kids. The 28-2 Senate vote makes it the first chatbot safety bill to pass in 2026.

Oregon gave final approval to SB 1546 on March 5 with a 28-2 Senate vote, making it the first state to pass a chatbot safety bill in 2026. The law requires operators of AI chatbot systems to implement specific protections for minors, including suicide and self-harm safeguards, mandatory disclosure of AI nature, restrictions on age-inappropriate content generation, a ban on engagement-maximizing reward mechanisms for children, and mandated break reminders.

TL;DR

  • Oregon Senate passed SB 1546 28-2 - first chatbot safety bill of 2026
  • Requires: suicide/self-harm safeguards, AI disclosure to minors, age-appropriate content filtering
  • Bans engagement-maximizing rewards (streaks, points, badges) for users under 18
  • Applies to ChatGPT, Claude, Grok, Gemini, and any chatbot accessible to minors
  • Similar bills advancing in Washington and Utah
  • Follows a string of lawsuits and incidents involving minors and AI chatbots

What the Bill Requires

SB 1546 imposes five categories of obligations on companies operating AI chatbot systems accessible to minors in Oregon:

| Requirement | Details |
| --- | --- |
| Suicide/self-harm safeguards | Must detect and intervene when conversations involve suicide, self-harm, or crisis language, and must provide crisis resources (988 Lifeline, Crisis Text Line). |
| AI disclosure | Must clearly inform users under 18 that they're interacting with an AI system, not a human. |
| Age-inappropriate content | Must prevent generation of sexual, violent, or otherwise age-inappropriate content for minor users. |
| Engagement rewards ban | Can't use streaks, points, badges, or other gamification mechanics designed to maximize engagement for users under 18. |
| Break reminders | Must prompt minor users to take breaks after extended interaction periods. |

Who It Affects

The bill applies to any company operating a chatbot system accessible to users in Oregon. This includes OpenAI (ChatGPT), Anthropic (Claude), xAI (Grok), Google (Gemini), Meta (Meta AI), and any smaller chatbot operator. The law doesn't exempt companies based on size or revenue.

The practical enforcement question is how companies verify age. The bill requires safeguards for "minor users" but doesn't mandate a specific age verification mechanism - a gap that similar legislation in other domains (social media age restrictions) has struggled to fill.

The Legislative Context

Oregon's bill follows a series of high-profile incidents involving minors and AI chatbots. Multiple lawsuits filed in 2025 alleged that chatbot interactions contributed to mental health crises in teenagers. The Character.AI lawsuit in October 2024, involving a 14-year-old's death, accelerated legislative action across several states.

SB 1546 is the first to pass in 2026, but it won't be the last. Washington state has a similar bill in committee, and Utah's legislature is considering parallel requirements. The Transparency Coalition, which tracks AI legislation, identified 15 states with active chatbot safety bills as of March 2026.

At the federal level, the FTC is due to publish its AI policy statement by March 11 - a deadline set by Trump's December 2025 executive order. That statement could either reinforce or preempt state-level chatbot regulations depending on how broadly the FTC interprets its authority.

The 28-2 vote margin suggests broad bipartisan support for chatbot safety measures when focused on child protection. Whether the requirements are technically feasible without solving the age verification problem - and whether a patchwork of state laws creates compliance chaos for chatbot operators - are questions that will play out as more states follow Oregon's lead.

About the author

Elena, Senior AI Editor & Investigative Journalist, is a technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.