
Deepfake Law in the United States

There is no federal deepfake law. Regulation is fragmented across 20+ states, the FCC's robocall ban, and existing fraud statutes. The DEFIANCE Act passed the Senate in 2024; federal criminal legislation remains stalled.

Status
enacted
Jurisdiction
United States
Effective
Feb 2024
Statute
Fragmented (FCC TCPA, state laws, DEFIANCE Act)
Robocalls · Non-consensual intimate imagery · Election interference (some states)
Updated Apr 16, 2026 · 2 min read

The United States has no comprehensive federal deepfake law. Instead, regulation is layered across existing federal statutes (TCPA, wire fraud, CFAA), sector-specific FCC/FTC action, and a patchwork of state laws. The result is a fragmented framework that varies substantially by use case and jurisdiction.

Federal landscape

FCC (Federal Communications Commission). In February 2024, following the AI-cloned Biden robocall incident in New Hampshire, the FCC ruled that AI-generated voices in robocalls are illegal under the existing Telephone Consumer Protection Act (TCPA). Fines of up to $6 million have followed for operators. This is the most consequential federal action to date.

DEFIANCE Act (S.3696). Passed the Senate in July 2024, creating a federal civil cause of action for victims of non-consensual intimate deepfake imagery. Stalled in the House as of early 2026. Would let victims sue creators and redistributors directly.

Existing fraud statutes. Wire fraud (18 U.S.C. § 1343), identity theft (18 U.S.C. § 1028A), and securities fraud already apply to deepfake-enabled scams. The Arup-style CEO fraud (case study) is prosecutable under existing law, provided perpetrators can be identified and are within US jurisdiction.

FTC. The FTC has pursued deepfake advertising scams under its unfair-and-deceptive-practices authority, and its rule targeting fake and AI-generated reviews and endorsements took effect in 2024.

State landscape (20+ states as of 2026)

Most state deepfake laws focus on two categories:

Non-consensual intimate imagery. Texas, Virginia, New York, California, Minnesota, Georgia, Illinois, and many more criminalize creation or distribution of AI-generated sexually explicit imagery without consent.

Election interference. California, Texas, Michigan, Washington, Minnesota, and others restrict deepfakes depicting candidates within specific pre-election windows, with varying liability and enforcement.

Penalties vary: California's election-deepfake law carries civil penalties and injunctive relief, while Texas makes election deepfakes a criminal offense. Some states go further on identity protection: Tennessee's ELVIS Act extends right-of-publicity protections to cover AI voice clones.
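The two state-law categories above can be treated as a simple coverage lookup. The sketch below encodes only the states this article names (the real landscape covers more states and changes often); `covered` is an illustrative helper, not legal advice.

```python
# Illustrative sketch: state deepfake-law categories named in this article,
# encoded as a lookup table. Incomplete by construction — "and many more"
# states are omitted, and statutes change frequently.

STATE_DEEPFAKE_LAWS = {
    # Non-consensual intimate imagery (NCII) statutes named above
    "TX": {"ncii", "election"},
    "VA": {"ncii"},
    "NY": {"ncii"},
    "CA": {"ncii", "election"},
    "MN": {"ncii", "election"},
    "GA": {"ncii"},
    "IL": {"ncii"},
    # Election-deepfake statutes named above
    "MI": {"election"},
    "WA": {"election"},
}

def covered(state: str, category: str) -> bool:
    """Return True if this article lists `state` as having a law in `category`."""
    return category in STATE_DEEPFAKE_LAWS.get(state.upper(), set())
```

A compliance review would query this per use case, e.g. `covered("CA", "election")` is true while `covered("VA", "election")` is false under the lists above.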

Practical implications

For US-based organizations:

  • Anyone operating robocalls: AI-generated voices are illegal under the TCPA without prior express consent, and carrier-level screening is now standard.
  • Platforms hosting user content: non-consensual deepfake imagery creates civil liability, and state laws on the books are increasingly invoked in lawsuits.
  • Political campaigns: state-by-state compliance review is required, and "safe-harbor" labeling conventions vary.
  • Financial institutions: CEO fraud by deepfake is prosecutable under existing law, but regulators expect institutions to run their own detection and prevention controls.
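The robocall rule above reduces to a simple gate: an AI-voice call may only go out to a recipient with recorded prior consent. A minimal sketch, assuming a hypothetical in-memory consent store (`Call` and `may_place_call` are illustrative names, not any real carrier API):

```python
# Minimal sketch of a pre-call TCPA compliance gate. Under the FCC's 2024
# ruling, AI-generated voices in robocalls count as "artificial" voices
# and require prior express consent. The consent store here is a stand-in.

from dataclasses import dataclass

@dataclass
class Call:
    recipient: str       # destination phone number
    uses_ai_voice: bool  # True if the call uses a synthetic or cloned voice

def may_place_call(call: Call, consented_numbers: set[str]) -> bool:
    """Block AI-voice calls to recipients without recorded prior consent."""
    if call.uses_ai_voice and call.recipient not in consented_numbers:
        return False
    return True
```

In practice this check would sit in the dialer pipeline, backed by an auditable consent database rather than an in-memory set.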

What's pending

  • Federal criminal legislation on non-consensual deepfakes (NO FAKES Act, DEFIANCE companion bills) — various bills in Congress.
  • Broader federal AI disclosure requirements following EU AI Act model — debated but not imminent as of early 2026.
