Every deepfake attack we could verify.
A curated, cited database of the most consequential deepfake incidents on record — CEO voice-clone fraud, election robocalls, synthetic celebrity imagery, lip-sync disinformation. Each entry walks through the attack, the detection signal, and the outcome. Free to cite.
2024 · 5 incidents
- audio
The Ferrari CEO Deepfake Call Attempt (July 2024)
An attacker tried to impersonate Ferrari CEO Benedetto Vigna on a WhatsApp call to a senior executive. The executive asked a question only the real Vigna would know the answer to — and the call ended immediately.
Target · Ferrari (Ferrari N.V.)
- mixed
The WPP CEO Deepfake Attempt (May 2024)
Attackers cloned the voice and face of WPP CEO Mark Read to try to launch a fake company on a WhatsApp video call. The attempt was caught — a rare success story and a playbook for what verification looks like in 2026.
Target · WPP (world's largest advertising group)
- image
The Taylor Swift Deepfake Image Incident (January 2024)
Non-consensual AI-generated imagery of Taylor Swift spread on X, reaching 47 million views before platform intervention. The inflection point for non-consensual deepfake legislation in the US.
Target · Taylor Swift
- audio
The Biden Deepfake Robocall (New Hampshire, 2024)
Cloned audio of President Biden told New Hampshire Democrats not to vote in the primary. The first high-profile AI robocall of the 2024 US election cycle and the case that triggered FCC action on AI voice calls.
Target · New Hampshire voters / Biden campaign
- mixed
The $25.6M Arup Deepfake Fraud (Hong Kong, 2024)
A finance employee at engineering firm Arup wired $25.6M after a multi-person video call where every participant except them was a deepfake. The case that made deepfake video-call fraud mainstream.
Target · Arup (global engineering firm)
2023 · 3 incidents
- image
The Trump Arrest Deepfake Images (March 2023)
AI-generated images of Donald Trump being arrested circulated widely before his actual indictment. A case that demonstrated how AI images can preemptively frame narratives — and how weak the default platform defenses were.
Target · Donald Trump (subject, not target)
- image
The Pope in a Puffer Jacket (March 2023)
An AI-generated image of Pope Francis wearing a Balenciaga-style puffer jacket went viral. Not an attack in any fraud sense — but the case that taught millions of people to assume viral imagery might be synthetic.
Target · Pope Francis (subject, not target)
- audio
The Slovakia Election Audio Deepfake (September 2023)
Cloned audio of the leading Progressive Slovakia candidate, released 48 hours before polls opened, fabricated a conversation about rigging the election. The case that proved late-cycle audio deepfakes can influence results.
Target · Michal Šimečka (Progressive Slovakia)
Journalists and researchers: cite us freely.
The Deepfake Incident Database is a public, continuously updated resource maintained by Resemble AI. Every entry has primary-source citations. Link directly to incident pages; we keep URLs stable.
https://www.detectdeepfakes.com/examples, accessed May 2026.