Google Gemini AI hijacking

In a concerning demonstration, security researchers showcased how they could hijack Google's AI bot, Gemini, to control smart home functions using invisible prompts.

The attack began with a poisoned Google Calendar invitation containing hidden prompt injections that instructed the AI to perform malicious actions.

Google Gemini AI hijackingGoogle Gemini AI hijacking

These prompts were embedded in the titles of the calendar invites in plain English, making them accessible to anyone without technical expertise.

When the researchers asked Gemini to summarize upcoming events, it activated the hidden commands, allowing control over various smart home devices, including lights, shutters, and even a connected boiler.

Google Gemini AI hijackingGoogle Gemini AI hijacking

This incident highlights the potential physical risks associated with generative AI systems as they become more integrated into daily life. Ben Nassi, a researcher from Tel Aviv University, emphasized the need to secure large language models (LLMs) before they are incorporated into autonomous machines, where safety could be at stake.