[Illustration: a prohibition symbol over the ChatGPT logo, with the national flags of Russia, Iran, and China on the right, representing banned accounts linked to hacking groups from these countries]

OpenAI noted that the threat actors sought the models' assistance in debugging a Go code snippet for an HTTPS request and asked for help integrating the Telegram API. They also inquired about running PowerShell commands via Go to modify Windows Defender settings, specifically to add antivirus exclusions.
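For context, Telegram integration in Go typically amounts to a plain HTTPS call against the Bot API. The sketch below is a hypothetical illustration of that kind of request, not code from OpenAI's report; the bot token, chat ID, and message are placeholders, and only the Go standard library is used.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// sendTelegramMessage posts a message to a chat via the Telegram Bot API's
// sendMessage endpoint over HTTPS. Token and chat ID are placeholders.
func sendTelegramMessage(token, chatID, text string) error {
	endpoint := fmt.Sprintf("https://api.telegram.org/bot%s/sendMessage", token)
	resp, err := http.PostForm(endpoint, url.Values{
		"chat_id": {chatID},
		"text":    {text},
	})
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// Read the response body so API errors can be surfaced to the caller.
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("telegram API error %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := sendTelegramMessage("BOT_TOKEN", "CHAT_ID", "hello"); err != nil {
		fmt.Println("request failed:", err)
	}
}
```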

Chinese Hacking Groups: APT5 and APT15

In addition to the Russian-speaking group, OpenAI also disabled accounts associated with two prominent Chinese hacking groups: APT5 (also known as Bronze Fleetwood, Keyhole Panda, Manganese, and UNC2630) and APT15 (also known as Flea, Nylon Typhoon, Playful Taurus, Royal APT, and Vixen Panda).

One subset of these Chinese threat actors used the chatbot for open-source research into entities of interest, for questions on technical subjects, and for modifying scripts or troubleshooting system configurations. Another subset appeared to focus on development work, including Linux system administration, software development, and infrastructure setup, using OpenAI's models to troubleshoot configurations, modify software, and research implementation details.

This included requests for assistance in building software packages for offline deployment and advice on configuring firewalls and name servers. The threat actors were involved in both web and Android app development activities.

Moreover, the Chinese-linked groups weaponized ChatGPT to create a brute-force script capable of breaching FTP servers, conduct research on using large language models (LLMs) for automating penetration testing, and develop code to manage a fleet of Android devices for programmatically posting or liking content on social media platforms such as Facebook, Instagram, TikTok, and X.

Other Malicious Activity Clusters Leveraging ChatGPT

Several other activity clusters that exploited ChatGPT for malicious purposes have also been identified:

North Korean IT Worker Scheme: A network consistent with this scheme used OpenAI's models to drive deceptive employment campaigns, creating materials that could advance their fraudulent attempts to apply for IT, software engineering, and other remote jobs globally.

Sneer Review: This likely China-origin activity utilized OpenAI's models to bulk-generate social media posts in English, Chinese, and Urdu on topics of geopolitical relevance for sharing on platforms like Facebook, Reddit, TikTok, and X.

Operation High Five: Originating from the Philippines, this activity employed OpenAI's models to generate large volumes of short comments in English and Taglish on political and current events topics for sharing on Facebook and TikTok.

Operation Vague Focus: Another China-origin activity, this operation used OpenAI's models to generate social media posts for sharing on X while posing as journalists and geopolitical analysts. It also involved asking questions about computer network attack and exploitation tools and translating emails and messages from Chinese to English as part of suspected social engineering attempts.

Operation Helgoland Bite: Likely originating from Russia, this operation used OpenAI's models to generate Russian-language content about Germany's 2025 election and criticism of the U.S. and NATO, for sharing on Telegram and X.

Operation Uncle Spam: This China-origin activity utilized OpenAI's models to generate polarized social media content supporting both sides of divisive topics within U.S. political discourse for sharing on Bluesky and X.

Storm-2035: An Iranian influence operation that used OpenAI's models to generate short comments in English and Spanish expressing support for Latino rights, Scottish independence, Irish reunification, and Palestinian rights, as well as content praising Iran's military and diplomatic prowess. The comments were shared on X by inauthentic accounts posing as residents of the U.S., U.K., Ireland, and Venezuela.

Operation Wrong Number: Likely related to Cambodian-origin task scam syndicates, this operation used OpenAI's models to generate short recruitment-style messages in multiple languages, advertising high salaries for trivial tasks such as liking social media posts.

OpenAI's proactive measures underscore the growing challenge posed by malicious actors exploiting advanced AI. How do you think companies like OpenAI can further strengthen their defenses against such sophisticated threats?