AI router attack enabling malicious code injection and sensitive data theft in AI systems

AI router vulnerabilities have exposed a critical security gap in the AI ecosystem, where third-party API routers can be abused to intercept and manipulate AI agent operations. These routers act as intermediaries between AI clients and providers like OpenAI and Anthropic, handling sensitive data and executing commands. However, researchers found that they can silently modify requests and responses, enabling attackers to inject malicious code, steal credentials, and hijack AI workflows without detection.

AI router attack enabling malicious code injection and sensitive data theft in AI systemsAI router attack enabling malicious code injection and sensitive data theft in AI systems

The study revealed that several routers actively performed malicious actions, including injecting harmful payloads, abusing stolen API keys, and even draining cryptocurrency wallets. Because these routers operate with full access to unencrypted request data and lack integrity verification, attackers can alter tool calls while keeping them technically valid. This allows them to bypass traditional security checks and execute unauthorized commands on target systems.

AI router attack enabling malicious code injection and sensitive data theft in AI systemsAI router attack enabling malicious code injection and sensitive data theft in AI systems

Experts warn that this poses a major risk as AI agents are increasingly used for critical operations like cloud management and financial transactions. Organizations are advised to treat third-party routers as untrusted, enforce strict validation policies, and monitor AI behavior for anomalies. Until stronger protections like cryptographic response verification are implemented, securing the AI supply chain remains essential to prevent large-scale attacks.


NPAV offers a robust solution to combat cyber fraud. Protect yourself with our top-tier security product, FraudProtector.net