detect·deepfakes by Resemble AI
Glossary

Deepfake

Also: deep fake · deepfakes

AI-generated or AI-manipulated audio, image, or video content created with intent to deceive — typically by impersonating a real person, depicting an event that didn't happen, or presenting fabricated evidence.

The term "deepfake" combines deep learning (the kind of AI used to make them) and fake (the product). It was coined by a Reddit user in late 2017 and originally referred narrowly to face-swap videos. Today it's used more broadly for any AI-generated media presented deceptively.

The intent-to-deceive distinction

A piece of AI-generated content is a deepfake when two conditions are met:

  1. It was generated by AI (rather than captured or authored by a human).
  2. It's presented as real, or in a way a reasonable viewer would interpret as real.

An AI image used as a magazine illustration, clearly labeled as such, is synthetic media — not a deepfake. The same image, unlabeled and captioned as a "photo of X happening," becomes a deepfake.

Forms

  • Audio deepfakes — voice cloning, voice conversion. Used in CEO fraud, family-member scams, authentication bypass.
  • Image deepfakes — AI-generated photos (via diffusion or GANs) or manipulated real photos (face swap, inpainting). Used for non-consensual imagery, disinformation, fake identity documents.
  • Video deepfakes — face swaps, lip-sync, reenactment, fully synthetic video. Used for impersonation, misattributed quotes, election manipulation.
  • Text deepfakes (sometimes included) — AI-generated text presented as human-authored. The detection problem here is distinct from the audio/image/video stack.

Deepfake regulation is fragmented. Non-consensual intimate imagery is criminalized in a growing list of jurisdictions; election interference is restricted in others; commercial impersonation generally falls under existing fraud law. See our by-country regulation guide.

See also