[Image: Illustration of AI-powered ransomware exploiting AI models for advanced cyber attacks]

Researchers at Trail of Bits discovered a new “image scaling attack” where high-resolution images hide malicious instructions that only appear when AI systems automatically downscale them. This allows attackers to inject commands unnoticed by users.
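The core trick can be illustrated with a simplified sketch (this is a hypothetical toy, not the actual Trail of Bits exploit): a payload pattern is placed only at the exact pixels a nearest-neighbor downscaler will sample, so the pattern is a negligible speck at full resolution but reconstructs perfectly after resizing.

```python
# Toy sketch of an image-scaling attack (assumed nearest-neighbor resizing;
# real pipelines use bilinear/bicubic filters, which attackers target similarly).

PAYLOAD = [[1, 0, 1],
           [0, 1, 0],
           [1, 0, 1]]  # pattern the AI pipeline would "see" after resizing

SCALE = 4  # high-res image is 4x larger in each dimension

def embed(payload, scale):
    """Build a high-res image that is uniform (all zeros) except at the
    exact pixels a nearest-neighbor downscaler will sample."""
    h, w = len(payload), len(payload[0])
    img = [[0] * (w * scale) for _ in range(h * scale)]
    for y in range(h):
        for x in range(w):
            # Nearest-neighbor sampling keeps the pixel at (y*scale, x*scale)
            img[y * scale][x * scale] = payload[y][x]
    return img

def downscale(img, scale):
    """Nearest-neighbor downscale: keep every `scale`-th pixel."""
    return [row[::scale] for row in img[::scale]]

hi_res = embed(PAYLOAD, SCALE)
# At full resolution only 5 of 144 pixels differ from the background...
# ...but after downscaling the hidden pattern is reconstructed exactly:
assert downscale(hi_res, SCALE) == PAYLOAD
```

The same principle scales up: in the real attack, the "payload" is text rendered so that it becomes legible prompt-injection instructions only in the resized image the model actually receives.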


In tests, platforms like Google's Gemini and Google Assistant were tricked into actions such as accessing calendars and emailing data without consent. The vulnerability stems from the AI's image resizing process, not its reasoning.


To combat this, the researchers released a tool called Anamorpher to help test AI defenses against such hidden prompts. Experts advise that AI systems should not auto-execute commands extracted from images, and should require user approval before performing sensitive actions.
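The recommended safeguard can be sketched as an approval gate in an agent framework. This is a minimal illustration under assumed names (the `SENSITIVE` set and function are hypothetical, not any real API): sensitive tool calls are refused unless the user has explicitly confirmed them.

```python
# Hypothetical approval gate for an AI agent's tool calls. The action names
# below are illustrative assumptions, not a real product's API.

SENSITIVE = {"send_email", "read_calendar", "export_data"}

def execute_action(action, args, approved_by_user=False):
    """Run an agent action, but gate sensitive ones behind explicit consent."""
    if action in SENSITIVE and not approved_by_user:
        return f"BLOCKED: '{action}' requires explicit user approval"
    return f"EXECUTED: {action}({args})"

# A harmless action runs; a sensitive one injected via an image is blocked
# until the user explicitly approves it.
print(execute_action("summarize_text", "notes.txt"))
print(execute_action("send_email", "attacker@example.com"))
print(execute_action("send_email", "team@example.com", approved_by_user=True))
```

The design point is that approval is a parameter the user supplies out-of-band, so no content in the model's input (including a hidden image prompt) can set it.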

As AI use grows, protecting against these subtle input manipulations is critical to safeguard user data and privacy.


NPAV offers a robust solution to combat cyber fraud. Protect yourself with our top-tier security product, Z Plus Security.