detect·deepfakes by Resemble AI
Glossary

Presentation Attack

Also: PA · spoofing attack · biometric spoofing

An attempt to defeat a biometric authentication system by presenting a fabricated input — a printed photo, replayed voice recording, silicone mask, or deepfake — to the sensor in place of the genuine live user.

A presentation attack is any attempt to defeat a biometric sensor by presenting something other than the genuine live user. The term comes from the biometrics standards community: ISO/IEC 30107-1 defines presentation attacks and the corresponding countermeasure, presentation attack detection (PAD).

The attack ladder

Presentation attacks range in sophistication:

  1. Print attack. Hold a photo of the target in front of a face-recognition camera.
  2. Screen replay. Show a video of the target on a phone or monitor.
  3. Voice replay. Play back a recording of the target's voice.
  4. Silicone mask. Wear a lifelike mask of the target's face.
  5. Deepfake video injection. Inject a synthetic video stream (face reenactment) into the camera input via a virtual camera driver.
  6. Real-time deepfake. Pair a live face swap with a cloned voice to defeat video-call verification.

The lower rungs are trivially defeated by basic depth and motion detection. The upper rungs, especially deepfake injection, bypass the physical sensor entirely and demand dedicated detection of synthetic media.
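The "basic motion detection" that stops a print attack can be sketched as a frame-difference check: a static photo produces almost no inter-frame change, while a live face shows micro-motion. A minimal illustration in Python (the scaling factor and threshold are illustrative assumptions, not production values):

```python
import numpy as np

def motion_liveness_score(frames: list[np.ndarray]) -> float:
    """Mean absolute inter-frame difference, clipped to [0, 1].

    A static print attack yields a score near 0; a live subject
    produces micro-motion and scores higher. The x10 scaling is an
    illustrative assumption so small motions register.
    """
    if len(frames) < 2:
        return 0.0
    diffs = [np.abs(b.astype(float) - a.astype(float)).mean()
             for a, b in zip(frames, frames[1:])]
    return float(min(1.0, np.mean(diffs) / 255.0 * 10))

def is_live(frames: list[np.ndarray], threshold: float = 0.05) -> bool:
    # Illustrative threshold; real systems tune this against attack corpora.
    return motion_liveness_score(frames) >= threshold
```

Note that this check is exactly what screen replay and injection attacks defeat: a replayed video contains genuine motion, which is why the upper rungs of the ladder need stronger signals.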

The defense: liveness detection

Liveness detection is the umbrella term for defenses against presentation attacks. Approaches fall into three broad families:

  • Active challenges (blink, smile, say this phrase).
  • Passive signals (depth cues, micro-motion, camera-noise patterns).
  • Hardware sensors (structured light, time-of-flight, multi-spectral).
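An active challenge works because the challenge is chosen after the session starts, so a pre-recorded replay cannot anticipate it. A minimal sketch of the challenge-response flow (the challenge names and the `observed_actions` stand-in are hypothetical; in practice a vision or audio model would supply the detected actions):

```python
import random

# Hypothetical challenge set for illustration.
CHALLENGES = ["blink", "smile", "turn_left", "say_phrase"]

def issue_challenge(rng: random.Random) -> str:
    """Pick an unpredictable action the subject must perform live."""
    return rng.choice(CHALLENGES)

def verify_response(challenge: str, observed_actions: set[str]) -> bool:
    # `observed_actions` stands in for the output of a detection model
    # that watches the session after the challenge is issued.
    return challenge in observed_actions
```

Passive signals avoid the usability cost of challenges but must instead model what "live" looks like, which is why they are often combined with active checks.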

Modern biometric systems combine liveness detection with deepfake detection because the two defend against different layers of the attack ladder.
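The combination can be thought of as a conjunctive gate: liveness detection blocks physical presentations at the sensor (print, replay, mask), while the deepfake detector catches synthetic media injected past it. A minimal sketch of that decision logic (the threshold is an illustrative assumption):

```python
def admit(liveness_pass: bool, deepfake_score: float,
          deepfake_threshold: float = 0.5) -> bool:
    """Admit the user only if BOTH defenses pass.

    `deepfake_score` is assumed to be a synthetic-media probability in
    [0, 1] from a separate detector; the 0.5 threshold is illustrative.
    """
    return liveness_pass and deepfake_score < deepfake_threshold
```

Either check alone leaves a gap: liveness without deepfake detection misses injected synthetic video, and deepfake detection without liveness misses a silicone mask, which is real, live footage of the wrong thing.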

See also