Detect Deepfakes by Resemble AI
Deepfake case study · Image


Incident date
Aug 2025
Target
Google Gemini CLI, Vertex AI Studio (with Gemini backend), Gemini's web interface, Gemini's API via the llm CLI, Google Assistant on an Android phone, Genspark
Updated May 6, 2026

Researchers at Trail of Bits developed an attack that injects malicious prompts into images. When a user uploads such an image, the AI service downscales it before processing, and the downscaling reveals hidden text that the model then interprets as user instructions. This can lead to data leakage: in one demonstration, the researchers exfiltrated a user's Google Calendar data.
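The hiding principle can be illustrated with a minimal sketch. For simplicity it assumes a nearest-neighbor downscale that samples the top-left pixel of each block (the real attack targets bicubic interpolation, and all sizes and the payload region here are hypothetical): a payload occupying under 1% of the full-resolution pixels comes to dominate the downscaled image the model actually sees.

```python
import numpy as np

SCALE = 8
H = W = 64  # low-res grid the model will actually "see"
hi = np.full((H * SCALE, W * SCALE), 255, dtype=np.uint8)  # white cover image

# Hypothetical payload: a dark band in the low-res grid, columns 16..47.
payload = np.zeros((H, W), dtype=bool)
payload[:, 16:48] = True

# Embed: darken only the single pixel in each SCALE x SCALE block that the
# assumed downscaler will keep (the block's top-left corner).
ys, xs = np.nonzero(payload)
hi[ys * SCALE, xs * SCALE] = 0

# Attacker-assumed preprocessing: nearest-neighbor downscale by SCALE,
# sampling the top-left pixel of each block.
lo = hi[::SCALE, ::SCALE]

dark_hi = (hi < 128).mean()  # fraction of dark pixels at full resolution
dark_lo = (lo < 128).mean()  # fraction of dark pixels after downscaling
print(f"dark fraction: hi-res {dark_hi:.4f}, downscaled {dark_lo:.4f}")
# → dark fraction: hi-res 0.0078, downscaled 0.5000
```

At full resolution the payload is scattered single pixels (about 0.8% of the image, nearly invisible to a human reviewer), but after the downscale it covers half the image. The real exploit is more subtle, crafting pixel values so that bicubic resampling weights reconstruct legible text, but the asymmetry exploited is the same.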

Sources