Presentation Attack
An attempt to defeat a biometric authentication system by presenting a fabricated input — a printed photo, replayed voice recording, silicone mask, or deepfake — to the sensor in place of the genuine live user.
The term comes from the biometrics-standards community: ISO/IEC 30107-1 formalizes both the attacks and their countermeasure, presentation attack detection (PAD).
The attack ladder
Presentation attacks range in sophistication:
- Print attack. Hold a photo of the target in front of a face-recognition camera.
- Screen replay. Show a video of the target on a phone or monitor.
- Voice replay. Play back a recording of the target's voice.
- Silicone mask. Wear a lifelike mask of the target's face.
- Deepfake video injection. Inject a synthetic video stream (face reenactment) into the camera input via a virtual camera driver.
- Real-time deepfake. Pair a live face-swap with a cloned voice to fool video-call verification.
The early rungs are trivially defeated by basic depth and motion detection; the later rungs, especially deepfake injection, require more sophisticated defenses.
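The ladder and its escalating defense requirements can be sketched as a simple lookup. The tier names and assignments below are illustrative assumptions drawn from this section, not a standard taxonomy:

```python
# Illustrative mapping of attack-ladder rungs to the minimum defense tier
# that stops them. Tier names are assumptions for this sketch, not ISO terms.
ATTACK_LADDER = {
    "print": "basic",              # photo held up to the camera
    "screen_replay": "basic",      # video shown on a phone/monitor
    "voice_replay": "basic",       # audio playback of a recording
    "silicone_mask": "hardware",   # assumed to need depth/multi-spectral sensing
    "deepfake_injection": "deepfake_detection",
    "realtime_deepfake": "deepfake_detection",
}

# Higher number = stronger defense tier (illustrative ordering).
DEFENSE_STRENGTH = {"basic": 0, "hardware": 1, "deepfake_detection": 2}


def is_mitigated(attack: str, deployed_defense: str) -> bool:
    """True if the deployed defense tier is at least as strong as the
    minimum tier this rung of the ladder is assumed to require."""
    required = ATTACK_LADDER[attack]
    return DEFENSE_STRENGTH[deployed_defense] >= DEFENSE_STRENGTH[required]
```

For example, `is_mitigated("print", "basic")` holds, while `is_mitigated("deepfake_injection", "basic")` does not, capturing the point that early rungs fall to basic checks but injection attacks do not.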
The defense: liveness detection
Liveness detection is the umbrella term for defenses against presentation attacks. Approaches include:
- Active challenges (blink, smile, say this phrase).
- Passive signals (depth cues, micro-motion, camera-noise patterns).
- Hardware sensors (structured light, time-of-flight, multi-spectral).
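An active-challenge flow like the first bullet can be sketched as a loop that issues random challenges and requires each expected response before a deadline. The challenge names and the `observe` callback (standing in for a real face-analysis model) are hypothetical:

```python
import random
import time

# Hypothetical challenge set for an active liveness check.
CHALLENGES = ["blink", "smile", "turn_left"]


def active_liveness_check(observe, rounds: int = 2, timeout_s: float = 5.0,
                          now=time.monotonic) -> bool:
    """Issue `rounds` random challenges; pass only if the observed action
    matches each challenge. `observe(challenge, deadline)` is an assumed
    callback that returns the action the analysis pipeline detected
    before `deadline` (a monotonic timestamp)."""
    for _ in range(rounds):
        challenge = random.choice(CHALLENGES)
        deadline = now() + timeout_s
        if observe(challenge, deadline) != challenge:
            return False  # wrong action, or nothing detected in time
    return True
```

Randomizing the challenge order matters: a replayed video can show a blink, but it cannot respond to a challenge it has never seen.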
Modern biometric systems combine liveness detection with deepfake detection because the two defend against different layers of the attack ladder.
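One simple way to combine the two layers is conjunctive score fusion: accept only when a liveness score and a deepfake-authenticity score both clear their thresholds. The threshold values below are illustrative assumptions, not tuned operating points:

```python
def fuse_decision(liveness_score: float, deepfake_real_score: float,
                  liveness_threshold: float = 0.8,
                  deepfake_threshold: float = 0.9) -> bool:
    """Conjunctive (AND) fusion: both detectors must independently accept.
    Scores are assumed to lie in [0, 1]; thresholds are illustrative."""
    return (liveness_score >= liveness_threshold
            and deepfake_real_score >= deepfake_threshold)
```

AND-fusion reflects the point above: a deepfake injector may produce a perfectly "live"-looking face, and a printed photo contains no synthesis artifacts, so either detector alone leaves a rung of the ladder uncovered.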