TLDRs:
- OpenAI banned accounts linked to China and Russia for using ChatGPT in surveillance and cybercrime operations.
- China-linked users sought to design social media monitoring tools for government clients.
- Russian-speaking groups misused ChatGPT to support malware development and misinformation campaigns.
- OpenAI has disrupted more than 40 malicious networks since it began publishing public threat reports in February 2024.
OpenAI has banned several ChatGPT accounts believed to be connected to Chinese and Russian entities after investigations revealed attempts to misuse the platform for surveillance, phishing, and malware development.
The decision was detailed in the company’s latest Threat Report, published this week, which outlines how state-linked and criminal actors have tried to exploit large language models for unethical purposes.
The banned Chinese accounts reportedly sought ChatGPT’s help in conceptualizing social media “listening tools” that could analyze conversations across X, Facebook, Instagram, Reddit, TikTok, and YouTube. These tools were allegedly meant to monitor what the users described as “extremist,” political, and religious content, activity that falls under OpenAI’s national security and misuse policies.
While none of the banned users appear to have used ChatGPT to perform direct surveillance, the company said the requests violated its terms.
“These users asked ChatGPT to create plans or documentation for AI-powered monitoring tools, but did not proceed to execute them,” explained Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team.
Suspected State-Linked Operations
In one instance, a suspected Chinese operator, using a VPN to disguise their location, asked ChatGPT to create promotional materials and project plans for a government-backed social media probe. Another user sought information about critics of the Chinese government on X, while a third attempted to identify petition organizers in Mongolia.
OpenAI noted that its models only provided publicly available information and refused to generate any personally identifying data. However, the cases underscore how authoritarian regimes may attempt to harness AI to strengthen their surveillance capabilities.
“These incidents offer a limited snapshot of how future abuses could evolve,” Nimmo added. “They show the direction of travel, even if the final destination remains unclear.”
Since launching public threat reporting in February 2024, OpenAI has dismantled more than 40 malicious networks using its systems for harmful or unlawful purposes.
Russian Cybercriminal Activity
OpenAI’s report also revealed bans targeting Russian-speaking groups who attempted to use ChatGPT for cyberattacks and influence operations. One group, tied to a network dubbed Stop News, used ChatGPT to draft video scripts for propaganda content before turning to other AI models to produce the videos distributed across YouTube and TikTok.
“We’re seeing adversaries routinely jump between models for minor gains in speed or automation,” said Michael Flossman, head of OpenAI’s threat intelligence division.
In a more severe case, OpenAI blocked accounts allegedly tied to Russian cybercriminal gangs that used ChatGPT to develop and refine malware. These users reportedly attempted to build remote access trojans, credential stealers, and clipboard hijacking tools. The report detailed that they used the model to generate code for evading antivirus detection, decrypting wallet data, and performing browser credential extraction.
OpenAI confirmed that its systems automatically rejected clearly malicious prompts but emphasized that the company continues to monitor coordinated attempts to misuse generative AI for hacking or espionage.
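OpenAI has not disclosed how its internal safeguards work, but developers building on its API can apply the same gatekeeping idea using the company’s public Moderation endpoint. The sketch below is a minimal illustration in Python with the official openai SDK; the screen_prompt helper and the sample prompt are hypothetical, and this is not a reconstruction of OpenAI’s internal safeguard stack.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def screen_prompt(prompt: str) -> bool:
    """Return True if a prompt passes moderation, False if it is flagged.

    Hypothetical pre-screening step: check user input against OpenAI's
    public moderation endpoint before forwarding it to a chat model.
    """
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    result = response.results[0]
    if result.flagged:
        # Report which policy categories tripped (e.g. "illicit", "harassment").
        tripped = [name for name, hit in result.categories.model_dump().items() if hit]
        print(f"Prompt rejected; flagged categories: {tripped}")
        return False
    return True


if __name__ == "__main__":
    if screen_prompt("Summarize this week's AI threat report."):
        print("Prompt accepted; safe to forward to the model.")
```

Production systems would layer far more than a single classifier check, but the basic pattern of refusing a request before any generation happens is the one OpenAI describes.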
Strengthening AI Safeguards
OpenAI stated that its safeguards remain effective and that its models have not provided new offensive capabilities to bad actors. Instead, threat actors appear to be using generative AI to streamline existing workflows or enhance their propaganda efforts.
The company reaffirmed its commitment to transparency through ongoing publication of threat reports and partnerships with global cybersecurity agencies.
“We’re determined to ensure that AI serves users safely and ethically,” Flossman said.
As AI systems become more sophisticated, experts warn that misuse risks will evolve just as quickly. OpenAI’s proactive detection and enforcement efforts could serve as a blueprint for how the industry can protect both innovation and public trust in the face of growing digital threats.