The WPP CEO Deepfake Attempt (May 2024)
Attackers cloned the voice and likeness of WPP CEO Mark Read in an attempt to set up a fraudulent new business venture over a WhatsApp video call. The attempt was caught, making it a rare success story and a playbook for what verification looks like in 2026.
- Incident date: May 2024
- Target: WPP (world's largest advertising group)
- Outcome: attack foiled by internal verification; no funds lost
On May 9, 2024, a senior WPP executive received a WhatsApp message purportedly from CEO Mark Read requesting a video call about a confidential new business venture. The caller looked and sounded exactly like Read. Small visual inconsistencies during the call, such as lip-sync drift and pose-limit artifacts when Read's "face" turned, raised the executive's suspicion, and they insisted on WPP's standard verification process. The request was never acted on.
It's the rare case where the defender won cleanly.
Why this case matters
Most public deepfake incidents are failure stories: funds lost, detection only after the fact. WPP's case documented, in detail, how a combination of visual skepticism and procedural verification foiled the same class of attack that had succeeded against Arup for $25.6M just three months earlier.
The attack stack
Based on Read's own post-incident description:
- Voice cloning trained on public Read audio (interviews, conferences, YouTube).
- Face reenactment or pre-rendered face swap on the video call.
- Spoofed WhatsApp photo/display name matching Read's real account.
- Social engineering — confidential new-venture framing to discourage verification.
The production quality was high enough to pass Read's own colleagues' initial visual check. What stopped the attack wasn't detection — it was policy.
The successful defense
WPP had in place:
- Written verification policy requiring callback on a known-good channel for any financial commitment during a video call.
- A cultural norm that following the verification policy wasn't seen as distrust; it was the default.
- Employee training that framed "it sounds real" as not sufficient.
The targeted executive's visual skepticism was a useful flag, but it wasn't the primary defense. The primary defense was the policy that triggered verification regardless of visual conviction.
Generalizable playbook
For any firm concerned about this attack class, the pattern that worked at WPP:
- Define verification channels in advance — specific numbers, apps, or procedures for authorizing any transfer or commitment.
- Make verification cultural, not optional. "Of course I trust you, and of course I'll still follow the process."
- Normalize pauses. Most deepfake attacks rely on time pressure. A 15-minute "let me verify" kills most of them.
- Pair policy with deepfake detection on recorded artifacts where available — not as the primary defense, but as a secondary layer.
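The core of the playbook is the callback rule: a financial request that arrives on one channel must be confirmed on a separate, pre-registered channel before anything is approved. A minimal sketch of that rule, with all names, channels, and the contact registry purely illustrative (nothing here reflects WPP's actual system):

```python
from dataclasses import dataclass

# Pre-registered known-good contact channels, maintained out of band
# (illustrative placeholder data, not real contact details).
KNOWN_GOOD_CHANNELS = {
    "mark.read": {"callback": "corp-directory-number"},
}

@dataclass
class Request:
    claimed_identity: str   # who the caller says they are
    channel: str            # channel the request arrived on
    is_financial: bool      # does it commit funds or credentials?

def requires_callback(req: Request) -> bool:
    """Policy: every financial request needs out-of-band verification,
    regardless of how convincing the caller looks or sounds."""
    return req.is_financial

def verify(req: Request, callback_confirmed: bool) -> str:
    """Approve only after a callback on a pre-registered channel succeeds.
    The callback must never use the channel the request arrived on."""
    if not requires_callback(req):
        return "proceed"
    if req.claimed_identity not in KNOWN_GOOD_CHANNELS:
        return "reject: no known-good channel on file"
    return "proceed" if callback_confirmed else "hold: await callback"
```

Note that `verify` never inspects the video or audio at all: visual conviction is simply not an input to the decision, which is exactly why the policy holds even when the fake passes a human check.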
What detection would have added
WPP didn't use real-time video deepfake detection on the call itself — the capability was immature in May 2024 and remains emerging in 2026. Had detection been available on the call, it would have been a confirmatory signal. But the procedural defense worked without it.
The lesson: don't wait for real-time detection before putting verification policy in place. The policy is the control; detection is the assist.
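The "policy is the control; detection is the assist" relationship can be sketched as a simple layering: an advisory deepfake score can flag an already-verified call for review, but it can never approve a request on its own. The detector here is a hypothetical stub, not a real product:

```python
def detector_score(recording: bytes) -> float:
    """Stand-in for a real deepfake detector; would return P(synthetic).
    Hypothetical stub for illustration only."""
    return 0.0

def decide(policy_verified: bool, score: float, threshold: float = 0.7) -> str:
    """Layer an advisory detection score on top of the policy outcome."""
    if not policy_verified:
        return "hold"                  # the policy gate always comes first
    if score >= threshold:
        return "verified-but-flagged"  # detection adds a review flag only
    return "proceed"
```

Because the policy gate is checked first, a missing or immature detector (as in May 2024) degrades gracefully: the control still works, it just loses the secondary flag.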