detect deepfakes · by Resemble AI

Deepfake Law in Canada

Canada has no dedicated federal deepfake statute. Regulation instead occurs through existing Criminal Code provisions, the federal PIPEDA privacy law, and the pending Artificial Intelligence and Data Act (AIDA); provincial laws fill some gaps.

Status: Enacted
Jurisdiction: Canada
Effective: Mar 2015
Statute: Criminal Code ss. 162.1, 264, 372; AIDA pending
Topics: Non-consensual intimate imagery · Criminal harassment · Fraud · Pending: AI governance (AIDA)

Updated Apr 16, 2026 · 2 min read

Canada regulates deepfakes through a patchwork of existing Criminal Code provisions, provincial privacy law, and the pending federal Artificial Intelligence and Data Act (AIDA). A dedicated federal deepfake statute has not been passed.

Key provisions

Criminal Code s. 162.1 — Publication of an intimate image without consent. In force since March 2015. Applied to AI-generated imagery in several 2023–2025 prosecutions, with courts interpreting the provision to cover deepfake depictions of identifiable persons. Maximum penalty: five years' imprisonment.

Criminal Code s. 264 — Criminal harassment. Deepfake-enabled harassment campaigns have been prosecuted under this section. Maximum penalty: ten years' imprisonment in aggravated cases.

Criminal Code s. 372 — False messages / impersonation. Covers deepfake-enabled impersonation carried out for fraudulent purposes.

AIDA (Artificial Intelligence and Data Act), pending. Would establish a federal AI-governance framework, including transparency obligations, risk-management requirements, and a penalty structure. The bill has passed first reading in Parliament; enactment is expected in 2026–2027. It includes deepfake-relevant disclosure provisions aligned broadly with Article 50 of the EU AI Act.

PIPEDA (Personal Information Protection and Electronic Documents Act). This federal privacy law covers biometric data, including the voice and facial features used to create deepfakes.

Provincial layers

  • Quebec: Law 25 (biometric data provisions); the Civil Code's right-to-privacy doctrine provides strong civil remedies for misuse of a person's image.
  • British Columbia, Manitoba, Nova Scotia, PEI: intimate image protection acts with civil remedies.
  • Alberta, Ontario: tort of invasion of privacy developed through common law.

Enforcement context

The Canadian Centre for Child Protection and provincial police forces have been active on AI-generated CSAM cases. Criminal prosecutions of adult non-consensual deepfake cases have increased through 2024–2025.

Practical implications

For organizations operating in Canada:

  • AI service providers: AIDA's passage will materially change compliance obligations. The current environment is less regulated than the EU or UK.
  • Platforms: Criminal Code obligations plus provincial laws require content-moderation infrastructure for deepfake imagery.
  • Enterprises: moderate compliance burden; expected to increase with AIDA enactment.

Sources