Detect Deepfakes by Resemble AI
Deepfake case study · Audio

The growing challenge of knowing who's really behind the API calls on AWS

A late-night AWS deployment turns into a security nightmare as suspicious activity and a deepfake attack lead to unexpected costs and service disruption.

Incident date
May 2026
Target
AWS user
Updated May 8, 2026 · 1 min read

As companies move more workloads to the cloud, new security challenges emerge. One such challenge is identifying who is truly behind API calls on AWS. In this incident, an AWS user saw suspicious activity, failed logins, and unauthorized resource provisioning, followed by an automated alert and a suspicious-sounding voice on the follow-up call.

What happened

The user's monitoring system showed multiple failed logins, unusual IAM role assumptions, and attempts to spin up expensive instances in a new region. An automated alert followed, but the voice on the follow-up call sounded "off." By the time the user recognized it as a deepfake or cloned-voice social-engineering attack, costs had already spiked. The incident highlights the growing difficulty of distinguishing real users from bots and compromised accounts, especially as companies deploy autonomous AI agents that can provision resources and manage infrastructure.
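The signals described above (failed console logins, role assumptions in odd regions, launches of expensive instances) can be screened for programmatically. The sketch below is a minimal, hypothetical example: the event dictionaries use a simplified, assumed shape loosely modeled on CloudTrail records, and the approved-region and instance-type lists are illustrative placeholders, not part of the incident described here.

```python
# Hypothetical sketch: flagging the signals named above from
# simplified CloudTrail-style event records. Field names and the
# allow-lists below are assumptions for illustration only.

APPROVED_REGIONS = {"us-east-1", "eu-west-1"}          # assumed baseline regions
EXPENSIVE_INSTANCE_PREFIXES = ("p4", "p5", "x2")       # illustrative "pricey" families


def suspicious_signals(events):
    """Return (reason, event) pairs worth alerting on."""
    flagged = []
    for e in events:
        name = e.get("eventName", "")
        region = e.get("awsRegion", "")
        if name == "ConsoleLogin" and \
                e.get("responseElements", {}).get("ConsoleLogin") == "Failure":
            flagged.append(("failed login", e))
        elif name == "AssumeRole" and region not in APPROVED_REGIONS:
            flagged.append(("role assumption in unusual region", e))
        elif name == "RunInstances":
            itype = e.get("requestParameters", {}).get("instanceType", "")
            if region not in APPROVED_REGIONS or \
                    itype.startswith(EXPENSIVE_INSTANCE_PREFIXES):
                flagged.append(("expensive or odd-region instance launch", e))
    return flagged


# Example: two of the three events trip a rule.
events = [
    {"eventName": "ConsoleLogin", "awsRegion": "us-east-1",
     "responseElements": {"ConsoleLogin": "Failure"}},
    {"eventName": "AssumeRole", "awsRegion": "us-east-1"},
    {"eventName": "RunInstances", "awsRegion": "ap-southeast-4",
     "requestParameters": {"instanceType": "p4d.24xlarge"}},
]
reasons = [r for r, _ in suspicious_signals(events)]
```

Automated checks like this catch the technical side of the attack; the voice-cloning half still requires verifying the caller through an independent, pre-agreed channel.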
