OpenAI Bans ChatGPT Accounts of Chinese Hackers Using AI for Malware Refinement and Phishing Attacks

OpenAI has banned multiple ChatGPT accounts tied to Chinese state-affiliated hackers who used the AI to refine malware and generate phishing content, as detailed in the company's October 2025 threat report. Since February 2024, OpenAI has disrupted more than 40 networks that violated its usage policies, noting that threat actors leverage AI for efficiency gains, such as faster coding and better-targeted scams, rather than to invent new attack methods. The company also flagged similar abuse by North Korean groups, though the report's specifics focused on Chinese operations.


A prominent case involved the "Cyber Operation Phish and Scripts" cluster, run by Chinese-speaking actors aligned with PRC intelligence requirements and overlapping with the tracked groups UNKDROPPITCH and UTA0388. The hackers used ChatGPT to debug malware tools such as GOVERSHELL and HealthKick, explore automation with other models such as DeepSeek, and craft multilingual phishing emails targeting Taiwan's semiconductor industry, U.S. academics, and critics of the Chinese government.


Additional bans targeted Chinese entities using AI for surveillance, including a "High-Risk Uyghur-Related Inflow Warning Model" that analyzed travel data, social media probes scanning platforms such as X, Facebook, and Reddit for "extremist" content, and research into the funding of government critics. OpenAI disabled the accounts, shared indicators with industry partners, and noted that its models often refuse direct malicious requests, limiting these actors to benign code snippets. The company underscored its ongoing investment in curbing AI misuse in cyber operations and influence campaigns.