Hollywood's Open Secret: Studios Are Using Far More AI Than They Admit

Industry insiders reveal a pervasive 'don't ask, don't tell' culture around AI in Hollywood, where studios use generative AI extensively while publicly downplaying it, screenwriters quietly rely on Claude and ChatGPT, and every 2026 Best Picture nominee allegedly used AI in production.

"Everyone's lying just a little bit." That is how Janice Min, former editor of The Hollywood Reporter and current CEO of Ankler Media, described the state of AI in Hollywood during a Business Insider interview this month. "Studios are lying about how much they're using it."

Asked whether studios were using AI more or less than they claim, Min answered: "Using it more."

Min went further, asserting with "some certainty that every single best picture nominee this year has used AI in its production process." She described the Academy as maintaining "basically a don't ask, don't tell policy" regarding AI, and challenged anyone to "find a screenwriter who is staring at a blank page and not talking to Claude or ChatGPT at the same time."

Her claims were corroborated by Hollywood talent agent Ryan Hayden, who told reporters: "A lot of people want plausible deniability right now."

This is not speculation from outsiders. These are industry insiders describing a systemic culture of concealment - one where AI is used pervasively in production while being denied publicly, where artists are asked to "launder" AI-generated content by redrawing it to obscure its machine origins, and where the gap between what studios say and what studios do grows wider with every production.

TL;DR

  • Ankler Media CEO Janice Min says studios are using AI more than they admit, and every 2026 Best Picture nominee used AI in production
  • The Academy Awards operates under a "don't ask, don't tell" AI policy, with mandatory disclosure only now being considered for the 98th Oscars
  • Documented cases include Marvel's Secret Invasion AI-generated credits, The Brutalist's AI accent enhancement, and Emilia Perez's AI voice cloning
  • Disney invested $1 billion in OpenAI while simultaneously suing Midjourney for copyright infringement
  • A 2024 study estimates that 204,000 entertainment jobs will be adversely affected by AI over three years, with concept artists and VFX workers hit hardest
  • Union contracts from the 2023 strikes include AI protections, but enforcement is nearly impossible when usage is hidden
  • Studio contracts with writers and actors expire again in 2026, with AI expected to be the dominant negotiation issue

The Evidence Trail

Min's claims would be easy to dismiss as industry gossip if not for the growing list of specific cases where studios were caught using AI - often after publicly denying it or simply staying silent.

The Brutalist (2025 Oscars)

The Oscar-contending period drama became the center of the AI disclosure debate when film editor David Jancso revealed that Respeecher AI technology was used to enhance the Hungarian accents of lead actors Adrien Brody and Felicity Jones. Director Brady Corbet defended the performances as "completely their own," noting the actors worked for months with a dialect coach. The controversy did not stop Brody from winning Best Actor - but it raised the uncomfortable question of where a human performance ends and AI augmentation begins.

Emilia Perez (2025 Oscars)

Netflix's "Emilia Perez" - winner of four Golden Globes and recipient of 13 Oscar nominations - used Respeecher AI voice cloning to help star Karla Sofia Gascon sing notes outside her vocal range, blending her voice with that of French pop star Camille. Sound mixer Cyril Holtz explained that Gascon's voice was too low for certain parts of the score due to her ongoing medical transition. Critics argued the AI-assisted vocal performance should have disqualified the film from performance-based awards.

Two of the most prominent films at the 2025 Oscars used AI in ways that directly affected the performances for which they were nominated. Neither disclosed it proactively.

Marvel's Secret Invasion (2023)

The Disney+ series featured AI-generated opening credits created via Method Studios, confirmed by director Ali Selim. Visual development artist Jeff Simpson, who worked on the show, publicly stated he believes "AI to be unethical, dangerous and designed solely to eliminate artists' careers." Method Studios claimed "No artists' jobs were replaced." The controversy erupted during the concurrent WGA strike, amplifying its symbolic significance.

True Detective: Night Country (2024)

HBO's reboot featured what appeared to be AI-generated band posters in background set dressing, with telltale signs including nonsensical text ("2st LIVE") and headless figures. Showrunner Issa Lopez claimed the posters were intentionally AI-looking because "it's so sad up there that some kid with AI made the posters" - a creative justification that conveniently avoided confirming whether AI was actually used to produce them.

The Academy currently offers only an optional AI disclosure form. Mandatory disclosure is being considered for the 98th Oscars in 2026.

Disney: Sue AI Companies, Then Invest a Billion in One

No company better illustrates Hollywood's AI hypocrisy than Disney.

The studio has been caught in multiple AI incidents: AI-generated promotional art for Disneyland Paris, the Oogie Boogie Bash event, and Disney+ marketing for The Muppet Show (featuring telltale errors such as Miss Piggy with gloveless hands). Disney has also used digital actor replacements in theme park content.

The cognitive dissonance peaked in December 2025 when Disney made a $1 billion investment in OpenAI, licensing over 200 Disney, Marvel, Pixar, and Star Wars characters for Sora video generation. CEO Bob Iger framed it as strategic inevitability: "We'd rather participate in the rather dramatic growth, rather than just watching it happen and essentially being disrupted by it." He called AI "the most powerful technology that our company has ever seen" and revealed plans for Disney+ to become a portal for AI-generated user content.

This from the same company that had sued Midjourney for copyright infringement over unauthorized use of its characters in AI training data. Disney's position, stripped of PR language: AI companies cannot use our content to train models, but we can invest a billion dollars in the biggest AI company to generate content with everyone else's training data.

The Executive Playbook: Say "Tool," Mean "Replacement"

Hollywood executives have developed a consistent rhetorical strategy: frame AI as a creative "tool" in public while making moves that suggest they see it as a workforce replacement technology.

Ted Sarandos (Netflix Co-CEO) told Fortune in May 2024 that showrunners and screenwriters "better start using AI or else someone who does will take their job." By October 2025, Netflix had declared it was going "all in" on generative AI. Sarandos publicly insisted AI will make content "better, not just cheaper."

Bob Iger (Disney CEO) has told the Hollywood Reporter that AI will "enhance and enable" creativity - while presiding over an unprecedented investment that puts Disney's entire character library into the hands of a generative AI company.

Ben Affleck was more blunt: "I wouldn't want to be in the VFX business. They're in trouble."

Tyler Perry halted an $800 million studio expansion in Atlanta after seeing OpenAI's Sora capabilities, while admitting he had already used AI for de-aging in two of his own films: "I think this will touch every corner of the industry."

The Lionsgate Precedent

In September 2024, Lionsgate became the first major studio to formalize AI usage by partnering with Runway to build a custom AI model trained on the studio's proprietary film and television archive. Executives discussed the potential to reduce production costs from $100 million to $50 million. The CEO described the deal as having "transformational impact."

The initial use case was described as "pre-visualization and storyboarding" - the kind of framing that minimizes the perceived threat. But building a custom generative model trained on an entire studio archive is not a storyboarding tool. It is infrastructure for replacing human creative labor at scale.

VFX artists and concept artists have been hit hardest by AI adoption, with computer graphic artist positions declining 33% in 2025 alone.

204,000 Jobs and Counting

The human cost of Hollywood's secret AI adoption is not theoretical. It is measurable and accelerating.

A February 2024 study surveyed 300 entertainment industry leaders; three-fourths of respondents indicated that AI tools had supported the elimination, reduction, or consolidation of jobs at their companies. The study estimated 204,000 positions adversely affected over three years.

A guild-commissioned report predicted that 21% of US film, TV, and animation jobs will be "consolidated, replaced, or eliminated" by AI by 2026 - more than 100,000 of the nation's 550,000 jobs in these sectors.

The numbers on the ground are already moving:

  • Computer graphic artists: 12% job decline in 2024, then a 33% decline in 2025
  • Concept artists: art outsourcing companies report laying off half of their concept artists
  • Pixar: laid off 14% of its workforce (175 workers), after previously cutting 75
  • DreamWorks Animation: cut 40 employees
  • Netflix Animation: laid off 50 employees (one-third of its feature animation division)
  • Technicolor/DNEG: laid off hundreds of VFX specialists
  • Paramount Global: shuttered its television studio entirely in 2024

Concept artists have been hit hardest. Animation studios have fired workers in visual development because they could get "way more visual development using Midjourney and then just have one or two artists curating the work." Senior concept artists report employers asking them to "fix" character designs generated using AI tools rather than create original work, dramatically reducing their billable hours.

Background actors have faced their own nightmare. During production of Disney+'s WandaVision, background actors had their faces and bodies scanned for about 15 minutes each to create digital replicas - without a clear explanation of how those replicas would be used. This became a central issue in the SAG-AFTRA strike.

Voice actors are under similar threat. Two professional voice actors sued Lovo, an AI text-to-speech company, for creating AI clones of their voices under false pretenses - Lovo had solicited recordings with the promise that they would be used only for "non-commercial, internal research," then commercialized them. In May 2025, SAG-AFTRA filed an unfair labor practice charge against Llama Productions (Epic Games) over the AI-generated James Earl Jones voice for Darth Vader in Fortnite.

What the Unions Won - And Why It May Not Matter

The 2023 Hollywood strikes - the longest in the industry's history - produced the first-ever contractual AI protections for writers and actors. On paper, they are significant.

The 2023 WGA and SAG-AFTRA strikes produced landmark AI protections, but enforcement depends on studios disclosing their AI usage - the very thing they are hiding.

WGA Protections

The Writers Guild agreement established that:

  • Studios cannot require writers to use AI
  • AI cannot be credited as a "writer"
  • AI-generated material cannot undermine a writer's credit or separated rights
  • Studios must disclose when materials provided to writers are AI-generated
  • Studios must meet with the WGA semi-annually to discuss AI developments

SAG-AFTRA Protections

The actors' agreement requires:

  • Explicit, informed consent before creating or reusing digital replicas
  • At least 48 hours' notice before any digital replica proposal
  • Separate negotiations for each digital replica use, plus a day rate payment
  • Background actors must be informed before body/face scanning

In May 2025, SAG-AFTRA ratified what it called the "strongest contractual AI guardrails achieved to date."

The Enforcement Problem

These protections share a fatal flaw: they depend on studios being honest about their AI usage. If studios are systematically hiding how and where they use AI - which is exactly what Min, Hayden, and multiple industry sources describe - then contractual protections are unenforceable. You cannot file a grievance over AI usage you do not know about.

The "laundering" practice makes this particularly insidious. When studios have AI generate concept art, then ask a human artist to redraw it just enough to obscure its origins, the resulting work appears human-made. No disclosure is triggered. No consent is required. No union protection applies.

Both contracts expire again in 2026. AI is expected to dominate the negotiations - and potentially trigger another labor stoppage if studios refuse to close the enforcement gaps.

The Regulatory Landscape

Governments are beginning to act, but the frameworks have significant gaps:

The Academy Awards currently offers only an optional AI disclosure form. After the Brutalist and Emilia Perez controversies, the Academy is investigating mandatory disclosure for the 98th Oscars (2026). New language states that AI tools "neither help nor harm the chances of achieving a nomination," with each branch judging "the degree to which a human was at the heart of the creative authorship."

California's AB 412 (AI Copyright Transparency Act) would require developers to document copyrighted materials used in AI training data, with penalties of $1,000 per violation per day. The state Senate has made it a two-year bill, effectively pausing it.

New York's AI Disclosure Law (effective June 9, 2026) requires conspicuous disclosure when synthetic performers appear in advertisements - but exempts motion pictures, television programs, and streaming content entirely.

The EU AI Act (deepfake labeling effective August 2, 2025) requires disclosure of AI-generated content that appears authentic, but includes a derogation for "artistic, creative, satirical, fictional or similar works". Fines for violations: up to 35 million euros or 7% of global turnover.

Every regulatory framework has carved out exceptions for exactly the use cases Hollywood cares about. The result is an industry with no binding obligation to disclose AI usage in the content audiences actually watch.

The Emerging Threats

The landscape is getting worse, not better.

ByteDance's Seedance 2.0 (February 2026) generates cinema-quality video from text prompts. Viral clips featured Spider-Man, Deadpool, and other copyrighted characters, prompting the MPA's Charles Rivkin to accuse ByteDance of "unauthorized use of U.S. copyrighted works on a massive scale".

An AI-generated synthetic "actress" named Tilly Norwood sparked outrage in September 2025 when reports suggested talent agencies were interested in signing the character.

Staircase Studios announced plans to produce 30 AI-generated movies in four years. A teaser for "The Sweet Idleness," billed as the first feature-length AI-generated film, was released in October 2025.

On music streaming platform Deezer alone, 13.4 million AI-generated tracks were detected in 2025. Daily deliveries of AI-generated music averaged 60,000 tracks in January 2026 - roughly 39% of all music delivered daily.

The Real Question

The Futurism article covering Min's comments correctly noted that "AI could describe a multitude of tools that aren't necessarily generative AI" and that "artists tend to be ardently against AI, perhaps more so than any other field." This skepticism is warranted. Not every use of machine learning in post-production is the same as replacing a human performance with a generated one.

But the evidence across dozens of incidents, executive statements, investment decisions, and worker impact data points in a single direction: Hollywood is using generative AI more extensively than it publicly acknowledges, and it has strong financial incentives to keep doing so.

The industry has learned from the Brutalist and Secret Invasion backlashes. Not to stop using AI - but to stop getting caught. The "don't ask, don't tell" policy Janice Min describes is not a failure of governance. It is a strategy. Studios get the cost savings of AI automation while maintaining the human artistry brand that audiences and award bodies value.

The losers in this arrangement are the workers whose jobs are being quietly automated, the audiences who do not know what they are watching, and the award nominees whose human achievements are being judged against AI-augmented competition they cannot see. The 2026 contract negotiations will determine whether unions can close the enforcement gap - or whether Hollywood's open secret becomes its permanent operating model.

