The Trump Arrest Deepfake Images (March 2023)
AI-generated images of Donald Trump being arrested circulated widely before his actual indictment. The case demonstrated how AI images can preemptively frame narratives, and how weak default platform defenses were at the time.
- Incident date: March 2023
- Target: Donald Trump (subject, not target)
- Outcome: No legal action; case study in preemptive-narrative deepfakes
On March 20, 2023, Bellingcat founder Eliot Higgins published a series of AI-generated images on Twitter depicting Donald Trump being arrested by NYPD officers. He prompted Midjourney v5 to produce the images, explicitly labeled them as synthetic, and shared them as a demonstration of the technology's capability.
Within 24 hours the images had been re-shared hundreds of thousands of times, most shares omitting Higgins' label identifying them as AI-generated, and many viewers believed them to be real photographs of an actual arrest.
Trump was formally indicted 10 days later, on March 30, 2023. The fake images preceded the real event by a week and a half.
The preemptive-narrative dynamic
The Trump arrest deepfakes weren't produced as an attack. They were produced as a demonstration. But they illustrated a new dynamic:
- AI images can preemptively frame events that haven't happened yet but seem possible.
- The original authorial context (Higgins' explicit "this is AI" label) evaporated within two share-hops.
- Once an image looks like news imagery, it gets treated like news imagery.
- The later real event didn't "correct" the fakes — it blurred with them in viewer memory.
This isn't quite the same threat model as the Biden robocall (deceptive by design) or the Taylor Swift imagery (non-consensual content). It's closer to "imagery arrives before context, and context never catches up."
Detection signal
The images carry fingerprints typical of Midjourney v5 output: diffusion-model texture and frequency artifacts, color-palette correlations, and (in several frames) specific anatomical implausibilities. Image deepfake detectors flag them with high confidence.
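As a rough illustration of that detection step, the sketch below scores a suspect frame with an off-the-shelf image classifier via the Hugging Face transformers pipeline. The model name and file path are placeholders, not a specific checkpoint or file used on these images; any classifier fine-tuned to separate AI-generated from camera-captured imagery would slot into the same call.

```python
# Rough sketch of scoring a suspect image with an off-the-shelf classifier.
# "your-org/ai-image-detector" is a placeholder name, not a published checkpoint.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="your-org/ai-image-detector",  # hypothetical detector checkpoint
)

# Path to one frame of the shared image set (hypothetical filename).
results = detector("trump_arrest_frame.jpg")

# Output is a list of {"label", "score"} dicts, e.g.
# [{"label": "ai_generated", "score": 0.98}, {"label": "photograph", "score": 0.02}]
for result in results:
    print(f"{result['label']}: {result['score']:.3f}")
```

A score from a classifier like this is only a signal; the point of the case is that the signal existed but was never applied at platform speed.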
The case again exposed the gap between detector capability and platform moderation velocity. Twitter (now X) did not run AI-image detection at scale in March 2023, and by the time human moderators triaged reports, the images had reached audience scale.
Provenance would have helped
If the Midjourney-generated images had carried C2PA manifests, every downstream share that retained the metadata would have carried the attestation. At the time, Midjourney didn't embed C2PA. Starting in late 2023, Adobe Firefly and several other generators did. By 2026, industry-leading generators ship with C2PA by default.
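For a sense of what that check looks like in practice, here is a minimal sketch of how a downstream platform or user could test an image for a C2PA manifest by shelling out to the open-source c2patool CLI from the Content Authenticity Initiative. It assumes c2patool is installed and on PATH; the helper name and file path are illustrative, and the exact JSON shape varies by tool version.

```python
# Minimal sketch: check an image for a C2PA manifest with the c2patool CLI.
# Assumes c2patool is installed; JSON output shape can differ between versions.
import json
import subprocess

def read_c2pa_manifest(path: str) -> dict | None:
    """Return the parsed C2PA manifest store for `path`, or None if absent."""
    proc = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if proc.returncode != 0 or not proc.stdout.strip():
        # c2patool reports an error (or prints nothing) when no manifest is found.
        return None
    return json.loads(proc.stdout)

manifest = read_c2pa_manifest("shared_image.jpg")  # hypothetical filename
if manifest is None:
    print("No C2PA provenance data; treat the origin as unverified.")
else:
    # The manifest store identifies the generator and any signed edit history.
    print("C2PA manifest present:", manifest.get("active_manifest"))
```

The check only works if the metadata survives re-encoding and platform stripping, which is exactly the gap the case highlights.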
The ongoing lesson
The Trump images and the subsequent real indictment together established that a sequence of synthetic imagery followed by the real event does not self-correct. Viewers who saw the fakes before the real story do not reliably update toward the authentic version; they often merge the two in memory. This is the argument for strong, persistent provenance metadata that survives cross-platform sharing.