Pro-Human AI Declaration Unites Left, Right, and Labor
A bipartisan coalition of 40+ groups - from the AFL-CIO to the Congress of Christian Leaders - released a 34-point declaration demanding human control over AI, corporate accountability, and a ban on autonomous lethal weapons.

Steve Bannon and Susan Rice don't agree on much. On March 4, they signed the same document.
The Pro-Human AI Declaration, released Wednesday by the Future of Life Institute, is a 34-point framework demanding human oversight of AI, criminal liability for technology executives, and a ban on systems capable of making autonomous lethal decisions. It carries signatures from Turing laureate Yoshua Bengio, Nobel economist Daron Acemoglu, AFT president Randi Weingarten, consumer advocate Ralph Nader, AI researcher Stuart Russell, and more than 40 organizations ranging from the AFL-CIO Tech Institute to the Congress of Christian Leaders.
"The path we're on right now is this race to replace, where you have a small number of incredibly powerful companies very openly saying that they want to build superintelligence, which, by definition, can replace every human job."
- Max Tegmark, Future of Life Institute president
TL;DR
- Released March 4, 2026 by the Future of Life Institute after a secret drafting meeting in New Orleans in January
- 34 principles across five pillars: human control, anti-monopoly, family and childhood protection, personal liberty, and corporate accountability
- Signatories include Steve Bannon, Susan Rice, Yoshua Bengio, Daron Acemoglu, Richard Branson, Randi Weingarten, Ralph Nader, Tristan Harris, Meredith Whittaker
- Specific proposals: ban AI legal personhood, mandatory AI content labeling, criminal penalties for tech executives
- Sam Altman and Elon Musk were deliberately not invited
A Coalition Built to Surprise
The drafting process started months before the release. About 90 political and community leaders gathered secretly at a New Orleans hotel in January 2026 to negotiate the final text. Max Tegmark's Future of Life Institute served as convener, but the institute deliberately chose not to hand-pick ideological allies.
Who Signed and Why
The signatories span labor, faith, national security, and business in ways that don't usually overlap. The AFL-CIO Tech Institute and SAG-AFTRA signed alongside the G20 Interfaith Forum Association and the Congress of Christian Leaders. On the individual side, progressive leaders signed alongside former Trump strategist Steve Bannon and conservative commentator Glenn Beck.
Joe Allen, co-founder of Humans First Coalition, explained the logic: "I think about it like, if there's knowledge that there's poison in the water supply - most people are going to be against it and it isn't partisan."
Randi Weingarten, whose union represents 1.8 million educators, described the alignment as a surprise: "We've been on parallel tracks for quite a while without knowing it."
The Pro-Human AI Declaration was drafted at a closed-door gathering of 90 leaders in New Orleans in January 2026, bringing together groups that rarely share the same room.
Who Wasn't Invited
Sam Altman was not in the room. Nor was Elon Musk. No representative from any major AI laboratory attended. That exclusion was intentional - the coalition's organizers wanted a document that could claim genuine independence from the industry it aims to regulate.
Alan Minsky, CEO of Progressive Democrats of America, said the declaration responds to tech leaders' "utter contempt for the average person's welfare."
| Stakeholder | Impact | Timeline |
|---|---|---|
| OpenAI, xAI, Anthropic | Liability exposure; potential criminal penalties for executives if legislation passes | 2-4 years |
| Workers and creatives | New protections against AI-driven displacement if adopted | 1-3 years |
| Congress | Bipartisan cover for AI legislation from both parties | 6-12 months |
| State legislatures | Ready-made framework for model AI laws; preemption battles likely | Near-term |
| AI investors | Regulatory risk premium on frontier model development | Immediate |
What the Declaration Actually Requires
The five pillars translate into concrete proposals. This isn't an appeal to values - it's a list of specific prohibitions and mandates.
Human Control
The declaration prohibits "autonomous lethal weapons powered solely by AI" and requires human oversight of consequential decisions. It calls for mandatory shutdown mechanisms on frontier systems and blocks any path to superintelligence development until there's broad scientific consensus the process can be done safely. Given that OpenAI, Anthropic, and xAI have all explicitly stated superintelligence is their goal, this pillar puts the declaration in direct conflict with the labs' stated roadmaps.
Anti-Monopoly
"No AI Monopolies" is one subsection heading. The declaration calls for democratic input on major AI transitions and shared distribution of economic gains from automation. February polling cited by the coalition found 80% of respondents support human control over AI decisions. The least popular principle - preventing monopolies - still got 69% support.
Corporate Accountability
This is the pillar with the sharpest edge. The declaration proposes criminal liability for executives whose products cause serious harm, independent safety standards free from industry influence, and mandatory pre-deployment testing for consumer-facing chatbots. It would also ban the industry from obtaining regulatory carve-outs through lobbying - a direct response to the campaigns AI companies are running in Washington right now.
The declaration also bans AI from being granted legal personhood. That sounds theoretical, but it isn't: the question of AI legal status has already appeared in patent filings, contract negotiations, and at least one federal lawsuit.
The timing wasn't accidental. The declaration was finalized weeks before the Anthropic-Pentagon standoff that dominated AI news in February, but Tegmark told TechCrunch the collision of the two events "wasn't lost on anyone involved." That standoff - which we've followed from the initial Pentagon blacklisting of Anthropic through the company's lawsuit against the Defense Department - made the question of who controls AI systems feel suddenly concrete.
The coalition's release also came just weeks after Anthropic quietly dropped its flagship safety pledge, removing the hard stop it had previously committed to in its Responsible Scaling Policy. And multiple prominent AI safety researchers have left the major labs over the past year - a departure pattern we covered in depth - adding weight to the coalition's argument that internal governance alone isn't working.
What Happens Next
The declaration isn't legislation. It has no enforcement mechanism and no automatic path to becoming law. What it does is political: it gives members of Congress on both sides of the aisle a document they can point to when pushing AI regulation, and it applies pressure on the White House from right and left simultaneously.
Two concrete deadlines arrive a week after the declaration's release. The Secretary of Commerce must publish an assessment of state AI laws by March 11, identifying which state-level regulations the Trump administration considers burdensome. The FTC chairman must issue a parallel policy statement on the same date. Both deliverables stem from Trump's December 2025 executive order and could accelerate or complicate the federal regulatory picture before any legislation based on the Pro-Human AI Declaration gains traction.
The coalition plans to use the declaration in those fights. Organizers are already lobbying members of Congress on both sides of the aisle, and they expect the March 11 Commerce assessment to create immediate conflicts with state-level AI laws the declaration's signatories support.
Whether the coalition holds once specific legislation arrives is an open question. Bannon and Weingarten agreeing on a preamble is one thing. Agreeing on bill text - with carve-outs, definitions, and enforcement mechanisms - is another matter entirely.
Sources:
- The Pro-Human AI Declaration - humanstatement.org
- Pro-human AI declaration brings together unlikely group - NBC News
- A roadmap for AI, if anyone will listen - TechCrunch
- Left, right and faithful unite to demand human control over AI - Canadian Affairs News
- AI's Political Resistance Rises - AIChief
