Save the Children exposes Spain’s deepfake crisis with 20% of youth victimized

MADRID, 10 July 2025 — A new report from Save the Children has unveiled a chilling reality: one in five young people in Spain has fallen victim to AI-generated nude “deepfake” images created without their consent while they were underage. This alarming finding is part of a broader study on online sexual violence affecting Spanish youth, which reveals that nearly 97% of respondents reported experiencing some form of abuse before turning 18. The report, titled “Redes que atrapan” (Nets That Trap), underscores the urgent need for robust legal and educational responses to safeguard the digital lives of children.

AI Deepfakes: A New Frontier of Digital Exploitation

The investigation, conducted by Save the Children Spain in collaboration with the European Association for the Digital Transition, surveyed more than 1,000 individuals aged 18–21 between March and April 2025, asking about their experiences as minors. Its findings expose the pervasive nature of online threats:

  • 20% of respondents stated that nude images of themselves were created via Artificial Intelligence (AI) without their consent during adolescence. This highlights AI deepfake technology as a rapidly emerging tool for abusers, compounding traditional forms of exploitation.
  • 97% reported experiencing some type of online sexual abuse, encompassing a range of harms including grooming, sextortion, and the non-consensual sharing of intimate images.

As noted by Catalina Perazzo, Save the Children’s Director of Influence, on July 9, 2025, “These figures represent only the tip of the iceberg,” primarily due to underreporting by victims and significant challenges in detecting and tracking online abuse, especially when content is AI-generated and distributed across encrypted platforms. The rise of such sophisticated digital threats poses an escalating challenge for law enforcement and child protection agencies globally, a concern frequently highlighted in our Global Conflicts coverage of digital threats and cybercrime.

Hidden Threats Behind Voluntary Sharing

The study also delved into the motivations and perceptions surrounding the sharing of intimate content among young people. While some content may be shared voluntarily, nearly half (48%) of respondents admitted they did not recognize the potential dangers involved. The top reasons cited for sharing were seeking attention (42%) and seeking affection or peer validation (40%). Disturbingly, approximately 27% of respondents had sent such intimate images as minors, while a significant portion faced coercion: 26% felt pressured to send content, and 20% experienced direct threats or blackmail.

A recent case in Alicante, documented in the Save the Children report, underscores the severe impact of these new forms of abuse: a 12-year-old girl was blackmailed into sharing a video with sexual content under the threat of exposing AI-generated nude images—images she had never produced herself. This case exemplifies how AI can amplify the psychological distress and coercive power of abusers, even when no real-life intimate image was initially shared.

Gendered Vulnerabilities and Legislative Gaps

The research indicates that girls, in particular, face an elevated risk of certain forms of online abuse. The study shows that 36% of young women reported sexual contact initiated by adults via social platforms, compared to 26% of young men. However, the report stresses that abusive behaviors, including deepfake creation and sextortion, often transcend gender, affecting individuals across the spectrum.

Despite growing calls for legal reform and enhanced digital education, Spain, like many other European nations, still faces significant legislative gaps in comprehensively addressing AI-generated sexual imagery. While Spain passed a law to regulate AI content and combat deepfakes in March 2025, imposing fines on companies that fail to label AI-generated content, EU-level experts are calling for expanded child protection laws that explicitly categorize deepfake sexual content as abuse. As a review in MDPI Legal Studies from May 2025 notes, current EU frameworks, including parts of the newly enacted EU AI Act, may not explicitly cover AI-generated non-consensual intimate imagery (NCII) within traditional legal definitions of child sexual abuse material (CSAM), creating a dangerous loophole that needs closing. This legislative challenge is part of a broader European effort to define ethical AI use, a topic explored in our EU Politics analyses.

Education, Laws, and Digital Literacy: A Multi-pronged Approach

Save the Children advocates for a strengthened legal framework under Spain’s LOPIVI law (Organic Law on Comprehensive Protection of Children and Adolescents against Sexual Violence) to explicitly include AI-generated abuse. They also stress the urgent need for comprehensive and ongoing digital education integrated into early school years, as detailed in the group’s “Redes que atrapan” campaign and accompanying educational initiatives. They further recommend robust bystander training and easily accessible reporting mechanisms like Spain’s Infància Respon system.

Part of the solution lies in enhanced national and EU-level cooperation. The Advocate for Children and Youth Commission in Ireland recently highlighted AI-assisted deepfake abuse in court cases, while Europol has issued stark warnings about the rapid growth of AI-generated child sexual abuse content. A Europol press release from February 2025 detailed “Operation Cumberland,” a global operation against AI-generated CSAM that led to 25 arrests across 19 countries, underscoring the international nature of this crime and the challenges posed by gaps in national legislation. This demonstrates the critical need for coordinated efforts across European borders, a theme central to discussions within our European Defense and security coverage.

Policy Implications for Spain and the EU

This widespread victimization underscores an urgent need for legislative and practical reform. Spain must align its domestic laws with EU directives on child protection and AI regulation under frameworks such as the Digital Services Act (DSA) and the EU AI Act. While the AI Act, which became applicable in certain aspects from February 2025, classifies manipulated media like deepfakes under “limited risk” with transparency obligations, child protection advocates argue for stronger classifications and the outright prohibition of NCII. The European Parliament’s official page on the EU AI Act emphasizes safety and fundamental rights, indicating the direction for future amendments.

Furthermore, greater funding for digital education, targeted parental awareness campaigns, and increased responsibility for Internet Service Providers (ISPs) to proactively detect and remove illicit content are paramount. These measures are critical for fostering safer online environments and protecting vulnerable populations, a topic also relevant to broader discussions on digital market regulation and its impact on user safety.

Citizens, Not Targets: Protecting the Digital Generation

Catalina Perazzo stresses that the paradigm must shift from viewing children as mere content producers to recognizing and protecting them as individuals with inherent rights and dignity. Schools, policymakers, technology platforms, and parents must collaborate effectively to preserve safety and dignity online.

As Save the Children warns, if left unchecked, this crisis risks normalizing digital exploitation and eroding fundamental trust in digital spaces, with long-term consequences for society. It is imperative that authorities across Spain and Europe act decisively and with urgency to protect the next generation from these insidious and evolving forms of online sexual abuse. For more in-depth analyses on social issues and policy responses, readers can explore our Editors’ Opinion section.
