OpenAI's New 37-Page Threat Report Details How Bad Actors Weaponize ChatGPT
OpenAI's February 2026 threat report documents romance scam rings, North Korean crypto hackers, state-backed phishing from China and Iran, and political influence campaigns spanning six countries - all powered by ChatGPT.

Hundreds of victims per month. That's how many people a single Cambodia-based romance scam operation was defrauding using ChatGPT to create fake dating profiles, promotional materials, and conversational scripts - all targeting Indonesian men interested in luxury lifestyle content.
TL;DR
- 4 nation-states documented using ChatGPT for operations: China, Russia, North Korea, Iran
- Hundreds of victims/month defrauded by a single AI-powered romance scam ring in Cambodia
- Thousands of fake accounts rolled out by Chinese law enforcement in a sustained influence campaign
- 6+ countries targeted by simultaneous political influence operations from a single operator
- Key finding: AI-created content wasn't the decisive factor in campaign success - traditional tradecraft still matters more
OpenAI published its latest threat intelligence report on February 25, documenting how criminal organizations and state-backed actors are using ChatGPT as an accelerant for operations that entirely predate AI. The 37-page report covers romance fraud, political influence campaigns, phishing at scale, and a growing trend of organized "scam-as-a-service" operations.
The Full Picture
| Threat Actor | Origin | ChatGPT Use | Targets |
|---|---|---|---|
| Romance scam ring | Cambodia | Fake dating profiles, scripts, chatbot integration | Indonesian men |
| Chinese law enforcement | China | Surveillance targets, phishing emails, face-swap tools | US officials, analysts, dissidents |
| "Nine emdash Line" | China | English/Cantonese social posts | Philippines, Vietnam, Hong Kong activists |
| Korean-language operators | North Korea | Code debugging, credential theft, crypto phishing | Cryptocurrency platforms |
| "Stop News" | Russia | Video scripts, social media content | Western audiences on YouTube/TikTok |
| Russian coders | Russia | Remote access tools, credential stealers | General targets |
| Scam centers | Cambodia, Myanmar, Nigeria | Translation, fake investment pitches, logistics | Global victims |
| Fake law firms | Unknown | Impersonation of US attorneys | Fraud victims |
What the Numbers Say
State-Backed Espionage Goes Commodity
The Chinese law enforcement operation is the most detailed case in the report. A small set of accounts originating in China used ChatGPT to request information about US persons, online forums, and federal building locations. They produced phishing emails targeting state-level US officials and policy analysts, inviting participation in fabricated paid consultation programs.
The same operation rolled out what OpenAI describes as "large-scale, resource-intensive and sustained" tactics involving hundreds of staff and thousands of fake accounts across social media platforms. These included filing bogus complaints to silence dissident accounts, mass-posting coordinated content, forging documents, and impersonating US officials to intimidate critics. When ChatGPT refused to help plan propaganda against Japanese Prime Minister Sanae Takaichi, the operators moved to other platforms and Chinese AI models, including DeepSeek.
Romance Fraud at Industrial Scale
The Cambodia-based dating scam combined manual ChatGPT prompting with an automated AI chatbot to create a hybrid human-AI fraud pipeline. The operation created logos for fake high-end dating services, produced images of fictitious women, and crafted conversational scripts that pressured victims into high-payment "tasks." OpenAI estimated the ring was defrauding hundreds of victims monthly.
Separate scam centers across Cambodia, Myanmar, and Nigeria used ChatGPT to translate messages, write fake investment pitches, and manage day-to-day logistics for large-scale fraud operations - effectively building scam-as-a-service infrastructure where AI handles the localization and scripting.
North Korean Crypto Operations
Korean-language operators used ChatGPT as a development tool: debugging code, building credential theft routines, and drafting phishing messages targeting cryptocurrency platforms. The report describes structured team-based workflows where ChatGPT served as a research and development assistant for attacks on crypto exchanges and blockchain infrastructure.
What the Numbers Don't Say
The report's most significant finding is also its most easily missed: "AI-generated content did not appear to be the decisive factor in whether a campaign was successful." Targeted advertising and established social media accounts proved more influential than AI-produced content in determining campaign outcomes.
This matters for calibrating the threat. ChatGPT is making existing criminal and espionage operations faster and cheaper, but it isn't creating new categories of attack. Romance scams predate AI by decades. State-backed phishing has been a fixture of cyber operations since the early 2000s. Political influence campaigns are as old as social media itself.
What AI changes is the economics. A single operator can now run influence campaigns across six countries simultaneously on Facebook, Instagram, and X. A fraud ring in Cambodia can produce localized scripts in Japanese, Indonesian, and English without hiring native speakers. A North Korean team can debug exploitation code without deep programming expertise. The barrier to entry drops, the volume goes up, and the operations get harder to distinguish from legitimate content.
The report also has inherent selection bias. OpenAI can only report on misuse it detects and disrupts on its own platform. Threat actors using open-source models like Llama or DeepSeek, or running local inference, are invisible to this methodology. The Chinese operators explicitly pivoted away from ChatGPT when it refused requests - the misuse simply moved elsewhere.
"Threat activity is seldom limited to one platform; threat actors may use different AI models at various points in their operational workflow."
- OpenAI Threat Intelligence Report, February 2026
So What?
For policymakers, this report strengthens the case for cross-platform intelligence sharing. OpenAI catching a fraud ring does nothing if the same actors rebuild on a different model the next day. The defense needs to be at the platform level (social media, financial services, messaging apps), not just the model level.
For the AI safety community, the report offers evidence that guardrails work - ChatGPT refused the anti-Takaichi campaign, and the refusal forced the actors to find alternatives. But it also shows the limits of guardrails in a multi-model world. If one model refuses, another will comply.
For everyone else: none of this is new. It's the same fraud, espionage, and influence playbook that has existed for years. It is just 10x faster now, and it's only going to get more sophisticated and harder to detect. Treat AI-generated messages with the same skepticism you should already be applying to unsolicited contact from strangers on the internet.
Sources:
- Disrupting Malicious Uses of AI - OpenAI
- From Dating Scams to Fake Lawyers: OpenAI Details ChatGPT Misuse - US News
- Chinese Group's ChatGPT Use Reveals Worldwide Harassment Campaign - CyberScoop
- OpenAI Finds Growing Exploitation of AI Tools by Foreign Threat Groups - Hackread
- OpenAI Intelligence Report Identifies New Tactics in AI-Enhanced Scams - PYMNTS
- ChatGPT Misuse: OpenAI Bans Scam, Fake Lawyer, China-Linked Accounts - Deccan Herald
