detect·deepfakes by Resemble AI
Deepfake case study · Video

The Obama/Peele Deepfake PSA (April 2018)

Jordan Peele and BuzzFeed released an early deepfake PSA depicting President Obama saying things he never said — to teach the public what was coming. Still the most-cited educational deepfake.

Incident date
Apr 2018
Target
n/a (educational PSA with consent)
Outcome
Over 150M views; enduring touchstone for deepfake public education
Updated Apr 16, 2026 · 2 min read

In April 2018, Jordan Peele and BuzzFeed released a video in which former President Obama appeared to call Donald Trump a "dipshit" and deliver startlingly candid political commentary. Halfway through, the video cut to a split screen revealing Peele himself, who explained that viewers had been watching a face-swap deepfake demonstration.

The PSA was explicitly educational, not an attack, and eight years later it remains the most-cited single deepfake in public-awareness curricula.

Technical context

The production was state-of-the-art for 2018:

  • Video — face swap built with consumer "deepfakes" software of the FakeApp/DeepFaceLab lineage.
  • Voice — Peele's own Obama impression, lightly EQ'd toward Obama's vocal character, not cloned.
  • Post-production — careful lighting matching and compositing to mask the boundary artifacts of 2018-era face-swap pipelines.

By 2026 standards, the quality is dated — visible boundary blending, occasional identity drift, limited pose range. At the time it was cutting-edge.
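The 2018-era face-swap tools described above all shared one core architecture: a single encoder trained on crops of both faces, paired with a separate decoder per identity. Swapping means encoding a frame of the driving performer and decoding it with the target identity's decoder. A minimal structural sketch, with plain numpy linear layers standing in for the real convolutional networks (all weights random and all names hypothetical, so this illustrates the data flow, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
FACE_DIM, LATENT_DIM = 64 * 64, 256  # flattened 64x64 grayscale face crops

# One shared encoder learns identity-agnostic features (pose, expression).
W_enc = rng.normal(scale=0.01, size=(LATENT_DIM, FACE_DIM))

# One decoder per identity learns to render the latent as a specific person.
decoders = {
    "obama": rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM)),
    "peele": rng.normal(scale=0.01, size=(FACE_DIM, LATENT_DIM)),
}

def encode(face: np.ndarray) -> np.ndarray:
    """Shared encoder: face crop -> latent code."""
    return np.tanh(W_enc @ face)

def swap(face: np.ndarray, target: str) -> np.ndarray:
    """The face-swap trick: encode the source performance,
    then decode with the *target* identity's decoder."""
    return decoders[target] @ encode(face)

peele_frame = rng.random(FACE_DIM)        # stand-in for one driving frame
obama_frame = swap(peele_frame, "obama")  # same performance, target identity
print(obama_frame.shape)                  # (4096,)
```

Training (not shown) alternates both identities through the shared encoder with a per-identity reconstruction loss; forcing one latent space to serve both decoders is what makes the code capture pose and expression rather than identity.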

Why it endures as an educational artifact

The Obama/Peele video works because it's:

  1. Self-disclosing. It shows you how the trick works, which makes it memorable.
  2. High-stakes framing. Depicting a recently departed US president made the political implications immediately clear.
  3. Celebrity amplification. Peele's name carried the video beyond tech and politics audiences to entertainment audiences.
  4. Replicable. University journalism programs, fact-checking workshops, and AI-literacy courses have used it as the canonical "what a deepfake is" example for years.

The subsequent deepfake-awareness gap

Eight years later, the detection problem the video raised hasn't gotten easier:

  • Production quality has advanced from 2018-era face-swap tools to real-time face-reenactment pipelines.
  • Production cost has dropped from requiring specialized ML expertise to a consumer credit card.
  • Human detection accuracy hasn't improved; if anything, spotting fakes by eye has gotten harder as generation models improve.

Detection technology has kept pace at the high end (Resemble AI's detectors reach 96.7% accuracy across modalities), but public awareness of what is possible has lagged behind the capability itself.

The enduring lesson

The video's implicit argument — you need to know this is possible before you see it in the wild — is still the core challenge of deepfake response. The Pope puffer-jacket image and the Taylor Swift incidents both demonstrated that viewers without prior awareness defaulted to "it's probably real." Pre-awareness is the foundation on which any detection-and-response pipeline sits.
