Deepfake case study · Image
Deepfake content generated by users of GenNomis AI tool (Apr 2025)
- Incident date: Apr 2025
- Target: Users of GenNomis AI tool
Updated May 6, 2026 · 1 min read
A data leak at South Korean AI company GenNomis exposed 95,000 files, including explicit and potentially illegal content generated by users of its AI tool. This included deepfakes and what appeared to be child sexual abuse material (CSAM). The unprotected database also revealed the prompts users had entered to generate these images.