Artifact
A visual, audible, or statistically measurable flaw left behind by a generative model — often imperceptible to a casual observer but detectable by a trained model. The primary signal deepfake detectors rely on.
An artifact is what a generative model leaves behind that it didn't mean to — a fingerprint of the synthesis process. In early deepfakes, artifacts were obvious: mismatched jewelry, garbled text, extra fingers. In 2026, artifacts are mostly invisible to humans but still detectable by models trained to look for them.
Categories of artifact
- Visual artifacts — frequency-domain fingerprints left by diffusion denoising, upsampling grid patterns, impossible reflections, inconsistent shadows, boundary blending rings.
- Audio artifacts — vocoder phase inconsistencies, spectral roll-off patterns, lack of natural room tone, unnaturally uniform sibilance.
- Temporal artifacts (video) — per-frame identity drift in face swaps, blink-rhythm irregularities, sharp quality drops at extreme head poses.
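Of the visual category, upsampling grid patterns are the easiest to make concrete: a periodic grid in pixel space shows up as energy at the highest spatial frequencies. A minimal sketch in Python (NumPy only; the score, the synthetic images, and the edge-of-spectrum heuristic are all illustrative, not a calibrated detector):

```python
import numpy as np

def upsampling_grid_score(image):
    """Fraction of spectral energy at the highest spatial frequencies.

    A regular 2x upsampling grid (e.g. from a transposed convolution)
    leaves periodic peaks near the Nyquist frequency; natural images
    concentrate energy at low frequencies. The ratio is a crude,
    uncalibrated proxy for that pattern.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    total = spectrum.sum() + 1e-12
    # Outermost rows/columns of the shifted spectrum = highest frequencies.
    edge = (spectrum[0, :].sum() + spectrum[-1, :].sum()
            + spectrum[1:-1, 0].sum() + spectrum[1:-1, -1].sum())
    return float(edge / total)

# Synthetic demo: a smooth random field vs. the same field with a
# 2-pixel checkerboard injected, mimicking an upsampler's grid.
rng = np.random.default_rng(0)
smooth = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)
gridded = smooth + 5.0 * (np.indices((64, 64)).sum(axis=0) % 2)

print("smooth :", upsampling_grid_score(smooth))
print("gridded:", upsampling_grid_score(gridded))
```

The gridded image scores higher because the injected checkerboard concentrates energy exactly at the Nyquist bins; a production detector would learn such spectral cues rather than hand-code them.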
Artifacts as an arms race
Each generation of generative models eliminates the most obvious artifacts from the previous generation:
- GAN-era (2017–2020) artifacts: checkerboard patterns from transposed convolution, specific color-channel correlations.
- Diffusion-era (2021–2025) artifacts: mid-frequency spectral fingerprints, upsampling grids.
- 2026 frontier artifacts: much subtler, mostly detectable only by statistical analysis over many samples.
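"Statistical analysis over many samples" typically means averaging away image content until only the generator's shared bias remains. A hedged sketch (NumPy; the "fingerprint", a faint sinusoid below the per-image noise floor, and all sizes are invented for illustration):

```python
import numpy as np

def mean_log_spectrum(images):
    """Average log-magnitude spectrum across samples.

    Content varies from image to image, so its spectral contribution
    averages out (roughly as 1/sqrt(N)); a bias shared by every sample
    from the same generator survives averaging as a stable peak.
    """
    acc = np.zeros(images[0].shape)
    for img in images:
        acc += np.log1p(np.abs(np.fft.fft2(img)))
    return acc / len(images)

# Hypothetical generator fingerprint: a sinusoid at 8 cycles whose
# amplitude (0.02) is far below the noise std (1.0) of each sample.
rng = np.random.default_rng(1)
x = np.arange(64)
fingerprint = 0.02 * np.sin(2 * np.pi * 8 * x / 64)
samples = [rng.normal(size=(64, 64)) + fingerprint for _ in range(500)]

spec = mean_log_spectrum(samples)
# The fingerprint bin (row 0, column 8) stands out from its
# neighbours only after averaging, not in any single sample.
```

This is why frontier artifacts resist single-image detection: no one sample gives the fingerprint bin enough signal-to-noise, but the bias is perfectly consistent across samples.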
Detection models chase the current artifact distribution. When a new generation of generative models removes an artifact, detectors must retrain on the new signatures. This is why update frequency matters: a detector trained on 2023-era artifacts misses the output of 2026-era models.