California AI Order Defies Trump on Privacy and Safety

Governor Newsom signed EO N-5-26 on March 30, requiring AI vendors seeking California state contracts to certify safeguards on privacy, bias, and civil liberties - directly countering the Trump administration's push to strip state AI authority.

On March 30, Governor Gavin Newsom signed Executive Order N-5-26, putting California on a collision course with the Trump administration over who controls the rules for AI in America. The order requires any company seeking a state contract to certify that its AI systems don't create illegal content, exhibit harmful bias, or undermine civil liberties. It also gives California the authority to override federal supply chain risk designations for its own procurement - a direct response to the Anthropic-Pentagon standoff that's been playing out in federal court for months.

"California leads in AI, and we're going to use every tool we have to ensure companies protect people's rights, not exploit them or put them in harm's way. While others in Washington are designing policy and creating contracts in the shadow of misuse, we're focused on doing this the right way."

  • Governor Gavin Newsom, March 30, 2026

TL;DR

  • Newsom signed EO N-5-26 on March 30, effective immediately
  • AI vendors must certify safeguards on CSAM, harmful bias, and civil liberties to qualify for state contracts
  • California's CISO can now override federal supply chain bans - aimed squarely at the Anthropic situation
  • 120-day window gives agencies until late July to draft implementing recommendations
  • The order opens a direct federal-state conflict on AI procurement authority

What the Order Actually Says

The EO has five operative sections, each with a distinct purpose.

Section 1 - Vendor Certification

The Department of General Services and the California Department of Technology have 120 days to submit recommendations for a new certification framework. Companies that want to do business with the state will need to attest that their AI systems have safeguards against:

  • Generating child sexual abuse material or non-consensual intimate imagery
  • Displaying harmful bias or lacking governance to reduce bias risk
  • Violating civil rights such as free speech, voting, human autonomy, and protections against unlawful discrimination and surveillance

There's an important caveat here: the 120-day deadline produces recommendations, not binding rules. Actual regulatory teeth require subsequent rulemaking, so the certification framework doesn't snap into place on day 121.

Section 2 - The Federal Override Clause

This is the most politically aggressive provision. The state's Chief Information Security Officer can review any federal government designation of a company as a supply chain risk. If the CISO concludes the designation is improper, the state's technology and procurement agencies will issue guidance allowing state departments to continue buying from that company.

The context is unmistakable. The Pentagon's designation of Anthropic as a supply chain risk - and Anthropic's subsequent court win blocking that designation - has been the central AI procurement drama of early 2026. California is now saying it won't automatically defer to Washington's vendor blacklists.

[Image: Governor Newsom speaking at an AI workforce partnership announcement] Newsom at a 2025 AI workforce partnership event with Google, Adobe, IBM, and Microsoft - the same companies named as partners under the new EO. Source: commons.wikimedia.org

Section 3 - Contractor Responsibility Reforms

The Government Operations Agency has 120 days to recommend reforms to contractor suspension and ineligibility authorities. The target: companies that have been judicially determined to have unlawfully undermined privacy or civil liberties. Unlike Section 2, this one cuts toward bad actors rather than protecting legitimate vendors from political blacklists.

Section 4 - Expanding State AI Use

This section reads partly as a counterweight to the restrictions elsewhere: the state also wants to deploy AI more aggressively. Agencies are directed to:

  • Expand employee access to vetted generative AI tools
  • Build a pilot app or website using GenAI that gives Californians streamlined access to state services organized by life event
  • Publish a data minimization toolkit for departments handling sensitive data

Section 5 - Watermarking Standards

CDT has 120 days to issue guidance for watermarking AI-generated or clearly manipulated images and video, following California Business and Professions Code sections 22757.2 and 22757.3. The order doesn't mandate a specific technical standard like C2PA; it defers to CDT to define what "industry best practices" means in practice.

Impact Assessment

| Stakeholder | Impact | Timeline |
| --- | --- | --- |
| AI vendors (all) | Must certify AI safeguards to qualify for CA contracts | ~July 28 (recommendations due) |
| Vendors on federal blacklists | CA can override; state contracts remain accessible | Immediate (ongoing CISO review) |
| State agencies | Must expand employee GenAI access; adopt watermarking guidance | 120 days |
| Workers | Broader access to AI tools; new government service interfaces | Late 2026 rollout |
| Federal government | Direct sovereignty challenge on procurement authority | Ongoing legal tension |

Who Benefits

California's economy is the world's fourth largest. The state is home to 33 of the top 50 privately held AI companies globally, accounts for 25% of all AI patents and conference papers, and captured 51% of all U.S. AI startup funding from Q3 2024 to Q2 2025, according to Carta data cited in the EO's press materials.

That market power is the point. When California puts certification requirements on state contracts, it's not mandating behavior in a small regulatory backwater - it's setting a potential template for a $4 trillion economy.

The companies that benefit most from Section 2 are those caught in the crossfire between federal procurement politics and legitimate commercial activity. Anthropic has been navigating exactly that tension since the Pentagon dispute began, and California is now positioning itself as an alternative procurement authority for companies that find themselves on Washington's wrong side.

[Image: The California State Capitol building in Sacramento] The California State Capitol in Sacramento, where the new EO was formally attested. Source: commons.wikimedia.org

Who Pays

The vendor certification framework creates compliance costs that fall unevenly. Large incumbents - the Googles, Microsofts, and IBMs named as workforce training partners in the EO itself - can absorb a new attestation requirement without much difficulty. Smaller AI companies without dedicated legal and compliance teams face a different calculation.

There's also a political cost. Any AI company that publicly engages with California's new procurement standards will be handing the Trump White House a data point in its argument that tech companies are choosing partisan alignment over federal partnership.

The Federal-State Collision

The EO is explicit about its motivations. Newsom posted on X: "While Trump pressures companies to deploy AI for autonomous weapons and domestic surveillance, California is using our power...to raise the bar on privacy and security."

In March 2026, the White House unveiled a national AI framework favoring a light-touch approach, following sustained lobbying from large tech companies. That blueprint was explicitly aimed at preempting state AI laws. California's EO is a structured refusal to comply with that preemption logic.

The legal terrain is unsettled. Federal preemption arguments usually require Congress to act - executive orders don't automatically override state law. But the March 11 federal deadline on state AI laws signaled that Washington intended to test that argument. California is now forcing the question.

What Happens Next

The 120-day clock starts now. Expect DGS and CDT to publish draft certification frameworks in July, followed by a public comment period before any requirements become binding on vendors.

Section 2 is the one to watch in the near term. If the federal government names another AI company as a supply chain risk, California's CISO can move quickly - the section carries no fixed deadline. Whether that authority holds up against a federal legal challenge is the open question.

The division within the GOP over Trump's AI executive order suggests the federal stance on AI regulation isn't monolithic. California may find more allies than the current temperature suggests - or the dispute could escalate into one of the more consequential federal-state legal battles in recent tech history.

The EO itself is careful to note that it "does not create any rights or benefits...enforceable at law or in equity, against the State of California." That boilerplate limits its immediate legal force. But what it does create is a clear political signal and a 120-day process that'll produce concrete recommendations - and from there, rules with real teeth.

About the author: AI Industry & Policy Reporter

Daniel is a tech reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.