North Korea Targets Europe with AI Deepfake Workers

DPRK operatives use real-time deepfake video and LLM-generated CVs to pass European hiring pipelines, funneling income back to Pyongyang's weapons programs.

North Korea's state-sponsored IT worker scheme has expanded into Europe, with operatives using real-time deepfake video, AI-generated CVs, and voice changers to pass job interviews and infiltrate tech companies across the UK, Germany, Portugal, Poland, and Romania. Google Threat Intelligence Group (GTIG) published findings this week confirming the geographic shift, driven by mounting US law enforcement pressure that has made American companies harder targets.

The scheme has run for years. DPRK operatives take remote developer roles at Western companies under fabricated identities, then funnel the income back to Pyongyang. A December 2024 DOJ indictment named 14 individuals for producing at least $88 million over six years. That prosecution, combined with a series of FBI raids on US-based laptop farms, has pushed operators to look for softer ground in Europe - where institutional awareness is weaker and the due diligence frameworks that US tech companies developed under regulatory pressure don't yet exist.

TL;DR

  • Google GTIG confirmed DPRK IT workers are actively targeting European companies in at least five countries
  • Deepfake video filters, voice changers, and LLM-generated documents are standard tools in the hiring pipeline attack
  • Mandiant estimates more than 3,000 DPRK-affiliated workers operating inside Western companies, producing over $600M annually
  • Caught operatives have threatened to leak proprietary code since October 2024, adding an extortion layer

"Recruitment has not naturally been seen as a security issue," said Jamie Collier, principal threat intelligence adviser at Google GTIG for Europe. In one documented case, a DPRK operative was described by their employer as "one of our best employees" before investigators identified them.

How the Scheme Works

The AI Toolkit

The operation runs on a stack of commercial tools that are widely available and cheap to use. Real-time deepfake video filters and face-swap apps overlay a stolen identity's face onto the DPRK operative during video interviews. Voice-changing software masks regional accents on calls. LLMs generate culturally appropriate names, CVs, cover letters, and email styles tuned to the target country's conventions. AI-powered tools for bypassing applicant tracking systems help clear automated screening.

Deepfake usage in detected infiltration attempts increased 700% since 2024, according to data from Okta. One actor identified in the GTIG report maintained at least 12 separate personas simultaneously, all targeting defense and technology roles across European markets. Alex Laurie, CTO at Ping Identity, noted that AI tools now let operatives produce not just fake names but fake professional histories that are culturally consistent - removing the "linguistic or cultural red flags" that previously helped HR teams detect fraud.

The European Pivot

GTIG's report is direct about the reasoning. "These factors have instigated a global expansion of IT worker operations, with a notable focus on Europe," it states, pointing to US indictments and employer awareness as the cause.

Workers are recruited for open roles through Upwork, Telegram, and Freelancer. False nationalities claimed by identified operatives include Italian, Japanese, Malaysian, Singaporean, Ukrainian, American, and Vietnamese. Laptop farms - physical arrays of devices managed by local facilitators who handle hardware for remote operators - have been confirmed operating in the UK.

US Department of Justice evidence photo showing North Korean IT workers at computers in an undisclosed location. Source: nbcnews.com

Who Is Exposed

Companies

The primary targets are technology companies, blockchain and AI firms, defense contractors, and healthcare organizations. Amazon's chief security officer Stephen Schmidt disclosed in January 2026 that Amazon had blocked over 1,800 suspected DPRK operatives since April 2024. Mandiant's 2026 threat intelligence report puts the broader count at more than 3,000 suspected DPRK-affiliated workers currently inside Western companies, producing over $600 million annually for the regime.

European companies face higher exposure right now. They have weaker institutional awareness, less experience with the specific tells that US security teams have catalogued, and no equivalent to the FBI's sector-specific bulletins that put US firms on alert over the past two years.

HR Teams and Hiring Platforms

The attack surface is the hiring pipeline itself. Standard reference checks, single-round portfolio reviews, and one video call offer no reliable protection against an operator running multiple AI tools and a stolen identity package. Coinbase CISO Jeff Lunglhofer told reporters that companies now require multiple video interviews, technical presentations, and in-person contact before making offers - an escalation that most remote-first hiring processes aren't structured for.

Freelance platforms Upwork and Freelancer have become primary recruiting channels for DPRK operatives, with Telegram used for direct contact. None of these platforms currently has systematic controls for detecting state-sponsored identity fraud at the individual contractor level.

Governments and Regulators

Stakeholder | Impact | Timeline
European tech companies | Active infiltration, IP theft, extortion risk | Now
Freelance hiring platforms | High exposure as onboarding channels | Now
Defense sector | Clearance-adjacent risk, classified data proximity | Now
EU/UK regulators | No specific deepfake hiring fraud guidance yet | 6-12 months
INTERPOL / national cyber agencies | Coordinating with US FBI and CISA | Ongoing

The January 2026 joint advisory from the FBI, CISA, and the US Treasury Department identified the Munitions Industry Department as the entity running the operation. That's the same DPRK body that oversees ballistic missile and weapons programs. This isn't a freelance criminal ring. It's defense ministry revenue generation.

Network monitoring capture from a North Korean IT worker laptop farm operation, showing approximately 40 connected devices, uncovered by Nisos investigators and shared with NBC News. Source: nbcnews.com

The Extortion Layer

The scheme added a second phase starting in October 2024. DPRK workers who are identified and fired have begun threatening employers with data leaks unless paid. Workers placed in sensitive developer roles - with access to production codebases, cloud infrastructure, or proprietary model weights - have used that access as a bargaining chip after removal. The combination of a long-term inside position and a data hostage changes the risk calculus for companies that might otherwise view the scheme as only a payroll fraud problem.

AI tools are already being used in cybercrime operations at scale, but the DPRK operation is unusual for its state backing, its operational maturity, and now its extortion component. OpenAI's own misuse reporting documented state actors using AI tools for influence and fraud - the North Korean scheme is what that looks like when it runs for years with national infrastructure behind it.

What Happens Next

The UK's National Cyber Security Centre and Germany's BSI have both been in contact with GTIG, according to The Record. Sector-specific advisories targeting the defense industrial base and AI startups with active hiring pipelines are expected in the coming months.

The practical pressure is on the hiring stack. Background check services without liveness detection, video interview platforms with no anomaly flagging, and freelance marketplaces with weak identity verification are the gaps DPRK operatives exploit. Some identity providers, including Okta, have begun piloting continuous authentication rather than point-in-time verification at hiring - a structural shift that closes the deepfake interview window.
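To make the structural difference concrete, here is a minimal, purely illustrative sketch of the two verification models. All names in it (Signal, verify_once, ContinuousVerifier) are hypothetical and do not correspond to any vendor's actual API; the point is only that a one-time gate never sees signals that degrade after hiring, while a continuous model does.

```python
# Illustrative sketch: point-in-time vs continuous identity verification.
# All names are hypothetical, not any identity provider's real API.
from dataclasses import dataclass, field

@dataclass
class Signal:
    name: str
    score: float  # 0.0 (likely fraudulent) .. 1.0 (likely genuine)

def verify_once(signals: list[Signal], threshold: float = 0.8) -> bool:
    """Point-in-time model: a single check at hiring, never repeated."""
    return all(s.score >= threshold for s in signals)

@dataclass
class ContinuousVerifier:
    """Continuous model: re-evaluates identity signals on every session."""
    threshold: float = 0.8
    history: list[float] = field(default_factory=list)

    def observe(self, signals: list[Signal]) -> bool:
        # Weakest signal dominates: one failing check flags the session.
        score = min((s.score for s in signals), default=0.0)
        self.history.append(score)
        return score >= self.threshold

# A deepfake strong enough to pass the interview but weaker in later
# sessions is caught only by the continuous model.
interview = [Signal("liveness", 0.9), Signal("doc-match", 0.85)]
later_session = [Signal("liveness", 0.4), Signal("doc-match", 0.85)]

print(verify_once(interview))       # one-time gate at hiring passes
verifier = ContinuousVerifier()
print(verifier.observe(interview))      # passes at onboarding
print(verifier.observe(later_session))  # flagged in a later session
```

The design choice the Okta-style pilots embody is exactly the second shape: identity becomes a property of every session rather than a checkbox cleared once during the interview.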

The EU AI Act doesn't currently cover the use of AI tools for identity fraud in hiring contexts. That gap will need a fix before European firms get the regulatory signal they need to treat this as a compliance obligation rather than a security curiosity.


Sources: Infosecurity Magazine - The Record - AI Commission - NBC News - PYMNTS

About the author: AI Industry & Policy Reporter

Daniel is a tech reporter who covers the business side of artificial intelligence - funding rounds, corporate strategy, regulatory battles, and the power dynamics between the labs racing to build frontier models.