detect·deepfakes by Resemble AI
Image detection

AI Image & Deepfake Photo Detector

Upload a photo or paste a URL. Find out whether it was generated by Midjourney, DALL·E, Stable Diffusion, or Sora, or whether a face has been swapped in, with the reasoning behind every verdict.

How our AI image detector works

Three independent signals feed the verdict:

  • Diffusion fingerprinting: each generator family (Midjourney, DALL·E, Stable Diffusion, FLUX, Imagen) leaves a characteristic mid-frequency hash pattern from its denoising pipeline, which we match against a library of known fingerprints.
  • Sensor-noise correlation: real cameras produce per-channel noise correlations from their Bayer filter that generative models don't reproduce.
  • Compression history: JPEG block boundaries reveal how many save roundtrips an image has been through. Real photos usually show one; AI images usually show the signature of a generation → compress → redistribute chain.
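The compression-history idea can be illustrated with a toy blockiness measure (a minimal sketch, not our production pipeline): gradient energy that concentrates at 8-pixel block boundaries is a telltale of JPEG history. The synthetic images below stand in for real inputs, and block-mean quantization stands in for actual JPEG coding; both are illustrative assumptions.

```python
import numpy as np

def blockiness(img, block=8):
    """Ratio of gradient energy at 8-px block boundaries vs. elsewhere.
    Near 1.0 means no block structure; much higher suggests JPEG history."""
    diffs = np.abs(np.diff(img.astype(float), axis=1))
    cols = np.arange(diffs.shape[1])
    at_edge = cols % block == block - 1        # differences that cross a block edge
    boundary = diffs[:, at_edge].mean()
    interior = diffs[:, ~at_edge].mean()
    return boundary / (interior + 1e-9)

rng = np.random.default_rng(0)
clean = rng.normal(128.0, 10.0, (64, 64))      # noise-like, block-free image
resaved = clean.copy()
for r in range(0, 64, 8):                      # crude stand-in for JPEG quantization:
    for c in range(0, 64, 8):                  # flatten each 8x8 block to its mean
        resaved[r:r+8, c:c+8] = resaved[r:r+8, c:c+8].mean()

# clean scores near 1.0; the block-quantized image scores far higher
print(blockiness(clean), blockiness(resaved))
```

A real detector would combine this with the other two signals rather than thresholding a single score, since sharp vertical edges in honest photos can also inflate the ratio.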

Resemble Intelligence turns the model's internal signals into plain-English reasoning for each region of the image. A score alone gives you a verdict; the reasoning tells you why that verdict holds.

What this detector catches

  • Diffusion-generated images from Midjourney v3–v7, DALL·E 2 and 3, Stable Diffusion 1.5 / 2 / XL, FLUX.1, Imagen 2 / 3, Firefly 1 / 2.
  • Face swaps and targeted manipulation (DeepFaceLab, FaceFusion, Roop).
  • Inpainted regions inside otherwise-real photos.
  • Sora stills and other text-to-video-frame generations.
  • Photos of a screen showing an AI image (a growing attack vector we explicitly train for).

Limitations

  • Heavily upscaled AI images (4–8x) can lose the mid-frequency hash.
  • Small inpainted regions in otherwise-real photos may average out in a whole-image verdict — for those, consider region-level analysis via the API.
  • Screenshots of AI images re-compressed multiple times may drop detection confidence below 0.7.

Common use cases

  • Insurance claims teams: AI-generated damage photos for auto and property fraud.
  • Dating apps: catfish profile verification at signup.
  • Journalism: verifying viral “photographs” of events.
  • E-commerce: product-photo authenticity for marketplaces.
  • Trust & safety: platform-wide non-consensual imagery detection.

Frequently asked questions

Which generators does this detector catch?

Midjourney (up to v7), DALL·E 3, Stable Diffusion XL + community checkpoints, FLUX.1, Google Imagen 3, Adobe Firefly, Ideogram, Recraft, Krea, Sora stills, plus face-swap pipelines (DeepFaceLab / FaceFusion / Roop).

Does C2PA help?

Yes, where present — we read the C2PA manifest and surface the signing chain. Absence isn’t evidence of fakery, but a verified C2PA signature from a known-good signer is a strong real signal.

What about PerTH watermarks?

If an image was generated by a Resemble-integrated system that embedded a PerTH watermark, we detect and surface it in the result. Useful for provenance audits.

Does heavy compression break detection?

Yes — images that have been re-compressed aggressively (Instagram → Discord → screenshot → Twitter) can lose the fingerprints our model needs. We still give a verdict but the confidence drops.
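A toy spectral model makes the degradation visible (an illustrative sketch that treats each platform hop as a progressively stronger low-pass filter, which is far cruder than real JPEG): the share of energy left in the mid-to-high frequencies, where generator fingerprints live, shrinks with every hop. The image, the `keep` cutoffs, and the band split are all assumptions for the demo.

```python
import numpy as np

def highfreq_fraction(img):
    """Fraction of spectral energy outside the central low-frequency band."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    low = np.zeros((h, w), bool)
    low[h//4:3*h//4, w//4:3*w//4] = True       # central quarter = low frequencies
    power = np.abs(f) ** 2
    return power[~low].sum() / power.sum()

def reshare(img, keep):
    """Crude stand-in for one lossy re-share: drop frequencies beyond ±keep."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    out = np.zeros_like(f)
    out[h//2-keep:h//2+keep+1, w//2-keep:w//2+keep+1] = \
        f[h//2-keep:h//2+keep+1, w//2-keep:w//2+keep+1]
    return np.fft.ifft2(np.fft.ifftshift(out)).real

rng = np.random.default_rng(1)
img = rng.normal(128.0, 20.0, (64, 64))        # stand-in for a fingerprint-rich image
fractions = []
for keep in (28, 24, 20):                      # each hop keeps fewer frequencies
    img = reshare(img, keep)
    fractions.append(highfreq_fraction(img))
# fractions shrinks hop over hop: less spectral evidence left for the detector
```

This is why the detector reports lower confidence rather than a hard verdict on heavily re-shared images: the evidence is genuinely weaker, not merely harder to read.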

Is my image stored?

No. Images are processed in memory, deleted within 10 minutes, and never used for training.
