An analysis by WIRED and Indicator, examining publicly reported incidents, has documented nearly 90 schools and more than 600 students across at least 28 countries affected by AI-generated deepfake nude images since 2023. The numbers trace the escalation of a problem that began slowly a couple of years ago and has accelerated as nudification technology has become more accessible and easier to use.
The explicit imagery involves minors and is classified as child sexual abuse material (CSAM). In nearly all documented cases, teenage boys—predominantly in high schools—have allegedly used generative AI applications to target classmates with sexualized deepfakes created from photos downloaded from social media platforms like Instagram and Snapchat.
The geographic distribution reveals the global reach of the problem. North America has seen nearly 30 reported cases since 2023, including one involving more than 60 alleged victims and instances where multiple schools were targeted simultaneously. More than 10 cases have been publicly reported in South America, more than 20 across Europe, and approximately a dozen in Australia and East Asia combined. However, the analysis explicitly notes it reviewed only publicly reported incidents with specific details, mostly in English-language reporting. Many cases are handled privately by schools and law enforcement without press coverage.
The true scale extends far beyond these documented cases. A survey by Unicef, the United Nations children's agency, estimates that 1.2 million children had sexual deepfakes created of them last year. In Spain, one in five young people told Save the Children researchers that deepfake nudes had been created of them. A 2024 survey by the Center for Democracy and Technology found that 15 percent of students said they knew of AI-generated deepfakes linked to their school.
Victims report severe and lasting psychological harm. One Iowa victim stated, "I'm worried that every time they see me, they see those photos." Another victim's family reported that the child "has been crying. She hasn't been eating." In multiple cases, victims have refused to attend school to avoid seeing those responsible for creating the material. Lawyers representing victims emphasize the lasting trauma: one attorney representing a New Jersey teenager noted that the victim "feels hopeless because she knows that these images will likely make it onto the internet and reach pedophiles" and must "monitor the internet for the rest of her life to keep them from spreading."
Schools have begun taking preventive measures. In South Korea and Australia, some schools have given pupils the option to opt out of yearbooks or have stopped posting student images on official social media accounts. When student imagery is published, some schools are turning to alternatives such as side profiles, silhouettes, backs of heads, distant group shots, creative filters, or approved stock photography, so that the images cannot serve as source material for deepfake creation.
The underlying technology ecosystem enables rapid abuse. Dozens of apps, bots, and websites let users create sexualized images and videos with minimal technical knowledge and just a couple of clicks. While sexualized deepfakes have existed since late 2017, the emergence of more powerful generative AI systems has created what researchers describe as a "shadowy ecosystem" of nudification and undress technologies. As one expert noted, "What AI changes is scale, speed, and accessibility."