detect·deepfakes by Resemble AI
Deepfake case study · Image

The Pope in a Puffer Jacket (March 2023)

An AI-generated image of Pope Francis wearing a Balenciaga-style puffer jacket went viral. Not an attack in any fraud sense — but the case that taught millions of people to assume viral imagery might be synthetic.

Incident date
Mar 2023
Target
Pope Francis (subject, not target)
Outcome
No material harm; major cultural moment for AI-image literacy
Updated Apr 16, 2026 · 2 min read

On March 25, 2023, an AI-generated image of Pope Francis wearing an oversized white Balenciaga-style puffer jacket went viral on Reddit, Twitter, and Instagram. Within 48 hours, tens of millions of people had seen it, many believing it to be a real photograph of a surprisingly fashionable pontiff.

It was the moment that AI-generated imagery crossed into mainstream public consciousness — not through an attack, but through a joke that most people couldn't tell was AI.

Why this case matters

The Pope image wasn't a fraud, a disinformation campaign, or an attack on the Pope. It was created by a Midjourney user for fun. But the cultural moment it crystallized was significant:

  • Diffusion-era quality had reached the point where casual, non-targeted prompts produced photographs convincing enough to fool most viewers on a phone screen.
  • Platform provenance loss was starkly visible: the image's AI-generated origin was obvious in Reddit's /r/midjourney, but that context survived only about three hops of re-sharing before it was lost.
  • Public awareness of AI-image capability shifted overnight. "Wait, that could be AI?" became a mainstream reaction.

Detection signal

The image is a straightforward detection target. Its Midjourney v5 fingerprint is well characterized: a diffusion-model signature in the mid-frequency bands, a characteristic color palette, and a familiar upsampling grid pattern. An image deepfake detector would flag it with very high confidence.

What was interesting wasn't detection — it was where detection lived in the pipeline. No major platform in March 2023 ran AI-image detection on viral content at scale. The image was debunked through social reporting and journalist analysis, not automated moderation. Several platforms began deploying detection in the months following.
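To make the frequency-domain idea concrete, here is a toy sketch of the kind of statistic such detectors build on: the fraction of an image's spectral energy that falls in a mid-frequency radial band. This is purely illustrative, not any platform's or Resemble AI's actual detector; the band limits (0.25 to 0.5 of the Nyquist radius) are arbitrary assumptions, and real systems learn these statistics from large labeled corpora rather than using a fixed ratio.

```python
import numpy as np

def mid_band_energy_ratio(gray: np.ndarray) -> float:
    """Fraction of 2-D spectral energy in a mid-frequency radial band.

    Illustrative only: the band edges below are hypothetical, and a real
    detector would learn such features rather than threshold one ratio.
    """
    # Centered 2-D power spectrum of the grayscale image.
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(spectrum) ** 2

    # Radial frequency coordinate, normalized so ~1.0 is the Nyquist radius.
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)

    # Hypothetical "mid-frequency" annulus where upsampling-grid and
    # diffusion artifacts are often discussed as showing up.
    mid_band = (radius >= 0.25) & (radius < 0.5)
    return float(power[mid_band].sum() / power.sum())

# Demo on synthetic noise (no real image dependency).
rng = np.random.default_rng(0)
demo = rng.standard_normal((128, 128))
ratio = mid_band_energy_ratio(demo)
```

In practice a classifier would compare many such band statistics against distributions estimated from known-real and known-generated images; a single ratio like this is only a feature, never a verdict.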

The provenance lesson

The Pope case is often cited as the canonical argument for C2PA adoption. Had the original image carried a C2PA manifest attesting to Midjourney authorship, every downstream share that preserved the metadata would have carried that attestation. Viewers on any platform that surfaced C2PA data would have seen "AI-generated — Midjourney" next to the image.

Instead, metadata was stripped on re-upload, context was lost in cross-platform sharing, and millions saw a "photograph" with no reliable signal about its origin.
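The "metadata stripped on re-upload" failure is easy to observe directly. In JPEG files, C2PA manifests are carried as JUMBF boxes inside APP11 (0xFFEB) marker segments, so a re-encode that drops metadata usually drops them. The sketch below is a simplified marker scan that checks only whether any APP11 segment survives; it is not a C2PA parser and does not validate manifest contents or signatures.

```python
import struct

def has_app11_segment(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's marker segments for APP11 (0xFFEB).

    C2PA embeds its JUMBF-boxed manifest in APP11 segments, so this is a
    rough presence check, not manifest validation. Simplified sketch:
    assumes a well-formed marker stream and stops at start-of-scan.
    """
    i = 2  # skip the SOI marker (0xFFD8)
    n = len(jpeg_bytes)
    while i + 2 <= n:
        if jpeg_bytes[i] != 0xFF:
            break  # not a marker; bail out of the simplified scan
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start of scan: entropy-coded data follows
            break
        if marker == 0xEB:  # APP11: where C2PA/JUMBF manifests live
            return True
        if i + 4 > n:
            break
        # Big-endian segment length, which includes its own two bytes.
        (seg_len,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        i += 2 + seg_len
    return False
```

Running this on an original export versus a platform re-upload of the same image is a quick way to see exactly where in a sharing chain the provenance signal disappeared.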

The cultural legacy

The Pope puffer jacket is now the default reference example in articles explaining what AI images can do. It appears in university journalism curricula, fact-checking training programs, and government AI-literacy materials. Its distinguishing feature — that it was essentially harmless — is what made it durable as an educational example.

Serious deepfake harm teaches hard lessons. A pope in a puffer jacket teaches the same lessons with a smile.

Sources