White House Calls on Congress to Block State AI Laws
The Trump administration released a seven-point AI legislative blueprint on March 20, urging Congress to preempt all state AI regulations with a single federal standard.

The Trump administration on Friday released its most detailed push yet for Congress to govern artificial intelligence at the federal level, publishing a seven-point legislative blueprint that calls for streamlined data center permitting, new child safety obligations for AI platforms, and the explicit preemption of all conflicting state AI laws.
The document, titled the National AI Legislative Framework, landed the same week the White House signaled it wants Congress to move fast. David Sacks, the administration's AI czar, cast it as a direct response to regulatory chaos spreading across the country.
"This was in response to a growing patchwork of 50 different state regulatory regimes that threaten to stifle innovation and jeopardize America's lead in the AI race."
- David Sacks, White House AI and Crypto Czar
Four states - Colorado, California, Utah, and Texas - have already enacted their own AI rules. The administration wants them gone, replaced by a single national standard that AI companies, including OpenAI, Anthropic, and Google, could comply with once instead of navigating separate requirements in each jurisdiction.
What the Framework Actually Says
The blueprint lays out seven areas where it wants legislation:
- Protecting children and empowering parents - AI platforms likely to be accessed by minors must implement features to reduce sexual exploitation and self-harm. Parents get tools to manage accounts and device use.
- Safeguarding communities - Includes measures against AI-enabled scams and age verification requirements for certain platforms.
- Respecting intellectual property and supporting creators - The administration believes AI training on copyrighted material does not violate copyright law and prefers courts, not Congress, to resolve disputes.
- Preventing censorship and protecting free speech - Platforms cannot face liability for hosting AI-generated speech that users create; government coercion of tech providers is explicitly addressed.
- Enabling innovation and ensuring AI dominance - Regulatory sandboxes to allow developers to experiment under relaxed rules; reduced permitting friction for new deployments.
- Educating Americans and developing an AI-ready workforce - Unspecified education and training investments.
- Streamlining energy and data center permitting - Calls on Congress to let data centers produce power on site, and explicitly states that residential electricity rates shouldn't rise because of AI infrastructure expansion.
The preemption language is the load-bearing piece. The framework states that "the framework can only succeed if applied uniformly across the United States, as a patchwork of conflicting state laws would undermine American innovation and ability to lead in the global AI race."
David Sacks, the White House AI Czar, has described the state-by-state approach to AI regulation as a threat to US competitiveness.
Source: banking.senate.gov
Who This Is For - and Who It Is Not
Companies Get the Win They Lobbied For
Technology companies have spent heavily in Washington to achieve exactly this outcome. OpenAI and Anthropic together channeled over $125 million into the 2026 midterms through affiliated super PACs, with preemption as a central objective for both. A single federal standard, even with meaningful child safety requirements, is cheaper to comply with than operating across dozens of conflicting state regimes.
The liability language is especially favorable to industry. The framework instructs Congress not to hold AI developers responsible for third-party misuse of their models - a provision that would shield the major labs from the kind of accountability that existing state rules in California and New York were moving toward. Both California's SB 53 and New York's RAISE Act require AI companies to establish whistleblower protections, report safety events, and disclose model testing procedures.
The White House's own statement framed the goal plainly: "The Federal government is uniquely positioned to set a consistent national policy that enables us to win the AI race and deliver its benefits to the American people."
States Are Being Asked to Step Aside
The administration's position is a logical extension of Trump's December 2025 executive order, which created a DOJ task force to challenge state AI regulations that conflicted with federal priorities. That order drew legal challenges and significant pushback - including from Republican-controlled states.
Colorado's SB 205, which imposed requirements on high-risk AI systems, was the first substantial casualty. Utah and Texas passed narrower rules. California's legislative session produced two major AI bills. The White House has treated all of them as threats.
The problem for the administration is that it faces resistance from within its own coalition. Over 50 Republican lawmakers across 22 states have signed letters opposing a federal moratorium on state AI regulation, arguing it violates federalism principles.
Ohio Republican state senator Louis Blessing III put it directly:
"To sit there and have things like executive orders, saying, 'Hey, you can't legislate in this space unless we say it's ok,' is blatantly unconstitutional, and frankly it's offensive. I think they are afraid of a massive onslaught from, frankly, some very wealthy people."
- Louis Blessing III, Ohio State Senate
Utah state representative Doug Fiefia, who supports federal action in principle, drew a sharper line: "There is a need and a want for a federal standard and I support it, but in the absence of it, there is a need to protect my constituents. Doing nothing is not the answer."
Civil Society Wants Specifics
Child safety champions, civil liberties groups, and consumer protection organizations broadly support some form of federal AI governance - but they're skeptical that a framework written with industry input will deliver meaningful protections.
The Dispatch's reporting on Republican frustration captured a sentiment that runs across the advocacy space: Jared Hayden of the Institute for Family Studies noted that "when it was introduced, a lot of the justification around the executive order was to stop 'woke' blue-state AI regulations, but what we have seen is that it is actually deterring red states from regulating AI."
Chris MacKenzie of Americans for Responsible Innovation put it differently: "There's a real groundswell of support within the Republican Party over protecting the ability to safeguard people from AI harm."
| Stakeholder | Impact | Timeline |
|---|---|---|
| AI companies (OpenAI, Anthropic, Google) | Single compliance standard replaces 50+ state rules | 12-24 months if Congress acts |
| State governments | Existing AI laws preempted; new rules blocked | Already constrained under Dec. 2025 EO |
| Content creators | Copyright disputes deferred to courts, not legislature | Open-ended |
| Energy utilities | Data centers allowed on-site generation; residential rates protected | Legislation required |
| Parents and children | New platform obligations for minors; parental controls mandated | Specifics TBD |
| Civil liberties groups | Federal standard preferred, but details matter | Watching Congress closely |
Congress will ultimately determine whether the White House blueprint becomes law - and the politics remain complicated on both sides of the aisle.
Source: commons.wikimedia.org
What Happens Next
The framework is not legislation. It's a wish list delivered to a Congress that has spent years failing to pass any comprehensive AI bill. Adam Thierer of the R Street Institute, who supports a federal approach, acknowledged the central contradiction: "The No. 1 pushback against the moratorium was, you can't preempt something with nothing."
Now the administration has provided something. Whether Congress acts on it is a separate question.
Sen. Ted Cruz, the Commerce Committee chair and the most prominent preemption advocate in the Senate, has been pressing for a federal bill since the executive order landed. Rep. Jay Obernolte of California has signaled his intent to introduce legislation codifying the CAISI framework. But the Senate's slim majority, internal Republican disagreements over federalism, and Democratic skepticism about industry-friendly liability language all narrow the path.
The White House says it "looks forward to working with Congress in the coming months to turn this framework into legislation that the President can sign." That timeline - "coming months" - is doing a lot of work.
The more immediate complication is Anthropic. The company is currently pursuing litigation related to government contracts, a conflict that sits awkwardly alongside the framework's framing of the AI industry as a unified American asset worth protecting. At the same time, the Blackburn bill targeting fair use for AI training moves through its own legislative track, separate from this framework and potentially in tension with it on copyright.
The March 11 federal deadlines tied to the original executive order have already passed without most states complying. Whether Friday's framework resolves that standoff, or simply documents it, will depend on what happens between the White House and Capitol Hill over the next few months.
Sources: White House | NBC News | The Dispatch | Roll Call | Washington Today | R Street Institute
