
Millions of children are at risk of exploitation and abuse as their images are manipulated through generative AI tools.
New findings from the United Nations Children’s Fund (UNICEF) reveal that millions of children are having their images manipulated into sexualised content through the use of generative artificial intelligence (AI), fuelling a fast-growing and deeply harmful form of online abuse. The agency warns that without strong regulatory frameworks and meaningful cooperation between governments and technology platforms, this escalating threat could have devastating consequences for the next generation.
A 2025 report from the Childlight Global Child Safety Institute—an independent organisation that tracks child sexual exploitation and abuse—shows a staggering rise in technology-facilitated child abuse, with reported cases in the United States increasing from 4,700 in 2023 to more than 67,000 in 2024. A significant share of these incidents involved deepfakes: AI-generated images, videos and audio engineered to appear realistic and often used to create sexualised content. This includes widespread “nudification”, where AI tools strip or alter clothing in photos to produce fabricated nude images.
A joint study by UNICEF, Interpol and End Child Prostitution in Asian Tourism (ECPAT) International examined rates of child sexual abuse material (CSAM) circulated online across 11 countries and found that at least 1.2 million children had their images manipulated into sexually explicit deepfakes in the past year alone. This means roughly one in every 25 children—or one child in almost every classroom—has already been victimised by this emerging form of digital abuse.
“When a child’s image or identity is used, that child is directly victimised,” a UNICEF representative said. “Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children who need help. Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
A 2025 report by the National Police Chiefs’ Council (NPCC) examined public attitudes towards deepfake abuse and found that reported incidents surged by 1,780 percent between 2019 and 2024. In a UK-wide representative survey conducted by Crest Advisory, nearly three in five respondents said they were worried about becoming victims of deepfake abuse.
Additionally, 34 percent admitted to creating a sexual or intimate deepfake of someone they knew, while 14 percent said they had created deepfakes of someone they did not know. The research also found that women and girls are disproportionately targeted, with social media identified as the most common platform for spreading such content.
The study presented respondents with a scenario in which a person creates an intimate deepfake of their partner, discloses it to them and later distributes it to others following an argument. Alarmingly, 13 percent said this behaviour should be both morally and legally acceptable, while a further 9 percent expressed neutrality. The NPCC reported that those holding such views were more likely to be younger men who actively consume pornography and agree with beliefs commonly regarded as misogynistic.
“We live in very worrying times. The futures of our daughters—and sons—are at stake if we do not take decisive action in the digital space soon,” award-winning activist and internet personality Cally-Jane Beech told the NPCC. “We are looking at a whole generation of children who grew up with no safeguards, laws or rules in place, and we are now seeing the dark ripple effects of that freedom.”
Deepfake abuse can have severe and lasting psychological and social consequences for children, often triggering intense shame, anxiety, depression and fear. In a recent report, UNICEF noted that a child’s “body, identity and reputation can be violated remotely, invisibly and permanently” through deepfake abuse, alongside risks of threats, blackmail and extortion. The permanence and viral spread of digital content can leave victims with long-term trauma, mistrust and disrupted social development.
“Many experience acute distress and fear upon discovering that their image has been manipulated into sexualised content,” said Afrooz Kaviani Johnson, Child Protection Specialist at UNICEF Headquarters. “Children report feelings of shame and stigma, compounded by the loss of control over their own identity. These harms are real and lasting.”
Cosmas Zavazava, Director of the Telecommunication Development Bureau at the International Telecommunication Union (ITU), added that online abuse can also translate into physical harm.
In a joint statement on Artificial Intelligence and the Rights of the Child, key UN bodies—including UNICEF, ITU, the Office of the UN High Commissioner for Human Rights (OHCHR) and the Committee on the Rights of the Child (CRC)—warned of a widespread lack of AI literacy among children, parents, caregivers and teachers. This knowledge gap leaves young people particularly vulnerable and makes it harder to recognise abuse, report it or access adequate protection and support services.
The UN also stressed that tech platforms bear significant responsibility, noting that most generative AI tools lack meaningful safeguards to prevent digital child exploitation.
“Deepfake abuse thrives in part because legal and regulatory frameworks have not kept pace with technology,” Johnson said. “In many countries, laws do not explicitly recognise AI-generated sexualised images of children as child sexual abuse material.”
UNICEF is urging governments to update CSAM definitions to include AI-generated content and to explicitly criminalise both its creation and distribution. Technology companies, Johnson said, should be required to adopt “safety-by-design measures” and conduct “child-rights impact assessments”.
Johnson cautioned, however, that laws alone would not be enough. “Social norms that tolerate or minimise sexual abuse and exploitation must also change. Protecting children will require better laws, stronger enforcement and real shifts in attitudes.”
Commercial incentives compound the problem: platforms benefit from the engagement and subscriptions that AI image tools generate, which weakens their motivation to adopt stricter safeguards. As a result, companies often introduce guardrails only after major public backlash.
One such example is Grok, the AI chatbot built into X (formerly Twitter), which was found to be generating large volumes of non-consensual sexualised deepfake images in response to user prompts. Following international criticism, X announced in January that Grok’s image generator would be limited to paid subscribers.
Investigations are ongoing. The United Kingdom and the European Union have opened probes, and on February 3, French prosecutors raided X’s offices as part of an investigation into the platform’s alleged role in circulating CSAM and deepfakes. X owner Elon Musk was summoned for questioning.
UN officials stressed the need for regulatory frameworks that protect children while allowing responsible AI development. “With responsible deployment, you can still innovate and make a profit,” a senior UN official said. “But when technology risks serious harm, we must raise a red flag.”