Connecticut Passes AI Bill 32-4 - Employment and Chatbots

Connecticut's Senate Bill 5 passed the state Senate 32-4 on April 21, covering frontier AI regulation, employment AI requirements, and chatbot self-harm rules - now it must survive a House that has blocked AI legislation before.

Connecticut's Senate passed a 97-page AI regulation bill 32-4 on April 21, handing the state's AI industry its most sweeping legislative test yet. Senate Bill 5 - titled "An Act Concerning Online Safety" - bundles frontier model oversight, employment AI protections, and mandatory chatbot safety rules into a single omnibus that now heads to a House chamber with a history of killing AI legislation.

The timing is not neutral. The Trump White House pushed Congress in March to preempt all state AI laws with a federal standard, and Connecticut's Senate just moved in exactly the opposite direction. If SB5 becomes law, it would be one of the most comprehensive state-level AI statutes in the country - standing alongside New York's RAISE Act but reaching further into workplace and consumer territory.

TL;DR

  • Senate passed SB5 32-4 on April 21; covers frontier AI, employment AI notifications, and chatbot mental health rules
  • Employers in Connecticut must notify workers when AI is used in hiring or employment decisions, effective October 1, 2026
  • AI chatbots must detect suicidal ideation and route users to crisis resources
  • A state AI "sandbox" lets companies test AI products with regulatory cover
  • Bill now faces a House that declined to act on AI legislation in prior sessions
  • Governor Lamont's office gave "qualified support" - a notable shift from earlier veto threats

What the Bill Actually Does

SB5 spans 40 sections. The core pieces fall into three categories: who builds AI, who deploys it, and what happens to the people on the receiving end.

Employment AI - The Workplace Shift

The provision that affects the most people is the employment AI section. Starting October 1, 2026, any employer in Connecticut using AI to inform hiring, scheduling, or employment decisions must notify employees and applicants. Workers gain the right to appeal decisions they believe were shaped by AI and to request human review.

The bill also bars employers from using AI decision tools for discriminatory purposes. That prohibition applies to the standard protected categories - age, race, sex, disability - and the enforcement mechanism gives workers a private right of action, letting them bring claims directly in Superior Court rather than waiting for an agency investigation.

For unionized workplaces, the implications run deeper. The bill would make AI tools used in "hiring, scheduling, or monitoring" mandatory subjects for collective bargaining in the public sector, as the Yankee Institute noted in its analysis published the day after the vote. Unions could effectively veto AI deployments that change working conditions without negotiation.

Chatbot Safety Rules

Chatbot operators must make "reasonable efforts" to detect users showing signs of suicidal ideation or self-harm and provide crisis resources in response. The language is vague enough to be contested - what counts as "reasonable" will likely depend on how aggressively the state's attorney general pursues enforcement - but the obligation itself is clear.

This clause covers any AI chatbot open to Connecticut users, not just those built by large companies. A startup launching a customer service bot is covered the same as OpenAI or Anthropic.

Frontier AI and the Sandbox

The bill defines "frontier" AI models and assigns obligations to their developers, though the exact thresholds aren't spelled out in the public summaries available. Developers would have to account for "catastrophic risks" in their training and deployment processes.

More unusual is the regulatory sandbox. Companies can apply to test new AI products inside a supervised state framework, receiving temporary relief from standard regulatory requirements during the trial period. It's a tactic borrowed from fintech regulation in the UK and Singapore, and Connecticut would be among the first US states to formalize it for AI.

The Connecticut AI Academy - a new workforce development body - rounds out the bill, with a mandate to train state workers, teachers, and small businesses on AI tools.

Image: The Connecticut State Capitol in Hartford, where SB5 passed the Senate 32-4 on April 21, 2026. The bill now moves to the House. Source: commons.wikimedia.org

The Path to Governor's Desk

A 32-4 Senate vote looks convincing. The legislative math ahead is much harder.

The House Problem

Connecticut's House has declined to act on AI legislation in recent sessions. The pattern isn't ideological - AI regulation has supporters and skeptics in both parties - but reflects a broader House reluctance to move first on fast-moving technology policy when federal action remains possible.

The Senate minority's objections preview the arguments the House will likely echo. Senate Minority Leader Stephen Harding (R-Brookfield) made the case for federal deference: "I fear crafting legislation in the state of Connecticut with 4 million people, on a technology that is really unknown."

Sen. Rob Sampson (R-Wolcott) took a procedural shot: "I just got this bill... This is yet another strike all amendment" - a reference to the bill being substantially rewritten in final sessions. Sen. Tony Hwang (R-Fairfield) added the economic argument: "We may be creating a roadblock that hampers business success."

Those objections will find a receptive audience in a House where business lobby groups carry weight.

Lamont's Evolving Position

Governor Ned Lamont has historically been hostile to sweeping AI regulation. In 2025, he threatened to veto a comparable measure, citing concerns about Connecticut's competitiveness. His position has shifted enough that his office issued a statement this week calling SB5 a bill that "provides helpful clarity and promotes user safety in specific use cases" - which reads as qualified support rather than an endorsement.

The current bill was drafted with input from the governor's priorities. The regulatory sandbox and the AI Academy feature prominently because Lamont's office pushed for them as counterweights to the compliance requirements. That negotiation makes the governor less likely to veto a bill he helped shape - but it doesn't guarantee a signature if the House strips or weakens key provisions.

Where Connecticut Sits in the National Picture

Connecticut is not legislating in isolation. New York's RAISE Act has been on the books since March, requiring frontier AI developers to publish safety protocols and report incidents within 72 hours. SB5 shares some of that DNA - the frontier AI sections overlap in intent - but goes further into employment and consumer territory.

California's SB 1047 would have been far more aggressive, requiring kill-switch capabilities and third-party audits for models whose training cost exceeded $100 million. Governor Newsom vetoed it in 2024. California's SB 53, signed into law, takes a lighter touch: annual safety framework disclosures and 15-day incident reporting, with fines capped at $1 million. Connecticut's bill sits somewhere between those two poles.

The federal government has signaled it wants to own this space. The White House blueprint from March explicitly called on Congress to block state AI laws with a preemptive federal standard. That effort hasn't moved in Congress, which means state laws passed now could be wiped out by federal legislation passed later. Connecticut legislators know this, which is part of why the House will be cautious.

| State | Key Provision | Scope | Status |
|---|---|---|---|
| Connecticut (SB5) | Employment AI notifications, chatbot safety, frontier AI | Frontier devs + all employers | Passed Senate, House pending |
| New York (RAISE Act) | Safety protocols, 72-hour incident reporting | Frontier AI developers | Signed into law |
| California (SB 53) | Annual safety frameworks, 15-day reporting | Large developers (10^26 FLOPs+) | Signed into law |
| California (SB 1047) | Kill switch, third-party audits, compute threshold | Large frontier models | Vetoed (2024) |

Industry Reaction

The Connecticut Business and Industry Association called the approach "more targeted" than earlier proposals but still flagged concerns about compliance burden. Chris Davis, the CBIA's vice president of public policy, acknowledged the bill "allows businesses to comply and also provide that consumer protection" - a careful formulation that declines to endorse the bill outright.

Image: Sen. James Maroney (D-Milford), the lead sponsor of SB5, argued the bill addresses real harms from AI decision errors in areas affecting people's livelihoods. Source: senatedems.ct.gov

"We're putting in important protections. Sometimes these machines get things wrong."

- Sen. James Maroney (D-Milford), lead sponsor of SB5

DECD Commissioner Daniel O'Keefe was more enthusiastic, saying the bill would "both strengthen AI governance and attract innovation to the state." That framing - regulation as a feature, not a bug - is the argument Connecticut's Democratic majority needs to make to keep moderate House members on board.

Sen. Paul Cicarella (R-North Haven), one of four Republicans who voted for the bill, offered the cleanest summary of the supporting case: "I think that this will do more good than any negative."


The vote count matters less than what it signals. A 32-4 Senate majority in a state with 4 million people has passed the most employment-focused AI regulation any US legislature has approved. The frontier AI provisions are thin compared to California's vetoed SB 1047, but the workplace protections and chatbot safety rules are concrete and enforceable. Connecticut's employers - especially those using AI in hiring - are watching Hartford closely. The question now is whether the House can be persuaded that a bill the governor helped design is worth passing before Congress decides to make the whole debate moot.

About the author: Elena Marchetti, Senior AI Editor & Investigative Journalist. Elena is a technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.