Blackburn's 300-Page AI Bill Ends Fair Use for Training
A 300-page Senate discussion draft would declare that AI training on copyrighted works is not fair use, sunset Section 230, and preempt 38 state AI laws.

Senator Marsha Blackburn released a nearly 300-page discussion draft on March 18 that would, if enacted, reshape the legal foundation every major AI company in the United States is currently standing on.
The draft is called the TRUMP AMERICA AI Act - formally, "The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act." The acronym isn't accidental. The bill is organized around four political priorities that Blackburn calls the "4 Cs": Children, Creators, Conservatives, and Communities.
"Instead of pushing AI amnesty, President Trump rightfully called on Congress to pass federal standards and protections to solve the patchwork of state laws that has hindered AI innovation. Congress must answer his call to establish one federal rulebook for AI to protect children, creators, conservatives, and communities."
- Sen. Marsha Blackburn (R-Tenn.), March 18, 2026
The draft responds directly to Trump's December 2025 executive order directing Congress to create federal AI standards - and explicitly to preempt the 38+ state AI laws now in effect. Whether it gets there is a separate problem entirely.
Impact at a Glance
| Stakeholder | Impact | Timeline |
|---|---|---|
| AI labs (OpenAI, Google, Anthropic, Meta) | Training on copyrighted works no longer qualifies as fair use; face product liability suits | From enactment |
| Content creators and rights holders | Can request administrative subpoenas; new voice and likeness rights | From enactment |
| Platforms and social media | Section 230 protections sunset two years after enactment | 2 years post-enactment |
| State legislators | 38+ state AI laws preempted by federal framework | From enactment |
| Publicly traded AI companies | Quarterly labor reporting requirements to Department of Labor | From enactment |
What the Bill Actually Does
The Copyright Provision - the Biggest Exposure
The single most consequential line in the draft for AI companies: unauthorized computational processing of copyrighted works "for the purpose of training, fine-tuning, developing, or creating AI does not constitute fair use under the Copyright Act."
Every frontier model in production was trained on copyrighted material. OpenAI, Google DeepMind, Anthropic, Meta - all have relied on fair use as the central legal defense in copyright suits filed by publishers, music labels, news organizations, and authors. The Britannica and Merriam-Webster lawsuit against OpenAI is one high-profile example of the litigation currently making its way through the courts. This provision, if enacted, removes that defense going forward.
The draft also lets copyright holders request administrative subpoenas to determine whether their work was used in AI training - a discovery mechanism the creative industries have wanted for years. It arrives bundled with a provision making AI-derivative works ineligible for fair use or copyright protections of their own.
Sen. Marsha Blackburn (R-Tenn.) has pushed for federal AI legislation for several years. This is the most thorough draft her office has released.
Source: commons.wikimedia.org
Section 230 - the Sunset Clause
The draft includes Sen. Lindsey Graham's legislation to sunset Section 230 two years after enactment. Section 230 is the 1996 provision that shields platforms from liability for user-created content. Removing it would expose every AI company running user-facing products to product liability lawsuits for content their models generate.
This goes further than previous Blackburn drafts, which called for Section 230 reform rather than outright elimination. Two years is a short runway for companies to restructure content moderation and product liability exposure.
Child Safety and Duty of Care
The bill bans AI companion chatbots for users under 17, requires age verification for all AI chatbot users, creates criminal penalties for sexually explicit AI chatbot conversations with minors, and prohibits market research on children under 13. It includes the Kids Online Safety Act (KOSA) text and imposes a general "duty of care" on AI developers for "reasonably foreseeable" harms - which opens a new track for civil suits that doesn't require the copyright or Section 230 provisions to be in play.
The Conservative Audit Requirement
This is where the bill becomes openly political. High-risk AI systems would face mandatory third-party audits to screen for "viewpoint or political affiliation discrimination." Federal government AI procurement would be restricted to models that are "neutral" and "do not manipulate responses in favor of ideological biases" - language that mirrors Trump's executive orders against what his administration has called "woke AI."
The Center for Data Innovation, a tech-policy think tank, was blunt: the proposal is "less a legislative foundation for governing AI and more a mood board for a set of long-standing grievances with Big Tech."
Who Pays the Price
AI Companies
The fair use elimination is the most direct threat. The copyright suits working through courts for the past two years - cases that OpenAI, Anthropic, and Google have been fighting partly on fair use grounds - become much harder to defend if this passes. ByteDance already suspended its Seedance video tool under Hollywood copyright pressure, and the bill would only harden that trend across the industry.
The bill's product liability provision, contributed by Sen. Josh Hawley (R-Mo.), allows suits for mental anguish, financial injury, and property damage caused by "defective" AI design. The U.S. Attorney General, state AGs, and private parties could all bring claims. The economic exposure here is difficult to model. It's not small.
Creators and Users
Copyright holders get the most tangible wins: subpoena rights, explicit IP protection, and voice/likeness protections under the added NO FAKES Act. That provision gives individuals the right to license their voice and visual likeness for digital replicas and prohibits unauthorized replicas.
Users under 17 face access restrictions on AI chatbots. Age verification requirements create compliance costs for every consumer-facing AI product. The companies that absorb those costs will pass them on.
State Legislators
Any state that has passed AI regulation - California, Colorado, Texas, and 35 others - would see those laws preempted. This is what AI lobbyists have wanted since 2024. The catch: the federal framework replacing state law is far more restrictive than most of what the states have enacted. The industry asked for preemption. This version of it isn't what they had in mind.
The bill would need significant Democratic support to clear the Senate's 60-vote threshold for cloture.
Source: commons.wikimedia.org
What Happens Next
This is a discussion draft, not a formal bill introduction. It hasn't been referred to any committee. Given its scope, it falls under multiple committee jurisdictions - Commerce, Judiciary, and Health, Education, Labor, and Pensions (HELP) at minimum.
The White House responded carefully: "We continue to have productive conversations with legislators as we work with Congress towards delivering national AI legislation." That's neither a yes nor a no.
Bipartisan sponsorship is real - Sens. Coons, Welch, Durbin, and Hawley all contributed legislation. But the bill was assembled to create a negotiating table, not to pass as written. The copyright provision will draw the most aggressive lobbying from AI companies. The Section 230 sunset will alarm the broader platform industry. The conservative audit requirement will complicate Democratic support.
Internationally, the context is significant. The UK shelved its AI copyright exception the same week, leaving creators and AI companies there without a legal framework. The EU delayed its high-risk AI compliance deadlines to late 2027. The US is moving faster, but getting from a 300-page discussion draft to law requires 60 Senate votes in a compressed legislative calendar.
The copyright fight is the one to watch. Whether AI training ends up legally distinct from fair use - or whether a narrower carve-out emerges for non-commercial research or synthetic data pipelines - will shape how frontier labs operate for the next decade. That fight starts here.
Sources: Roll Call · Sen. Blackburn Official Announcement · Center for Data Innovation · Music Ally · Engadget · Deadline
