UNICEF reports at least 1.2 million children across 11 countries have had images manipulated into sexual deepfakes, calling for urgent AI safeguards and legal reforms
UNITED NATIONS: The UN children’s agency has warned of a rapid increase in the use of artificial intelligence to create sexually explicit images of children.
A UNICEF-led investigation across 11 countries found at least 1.2 million children reported their images had been manipulated into sexually explicit deepfakes.
The findings highlight the proliferation of “nudification” tools that digitally remove clothing from photographs to create sexualised images. “We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material,” UNICEF stated.
The agency stressed that “deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
It criticised AI developers for releasing tools without adequate safeguards, noting that the risks are compounded when such tools are integrated into social media platforms.
Elon Musk’s AI chatbot Grok has faced bans and investigations in several countries for allowing users to create sexualised images with simple text prompts.
UNICEF’s study also revealed growing awareness and concern among children about the threat of deepfakes.
In some study countries, up to two-thirds of children expressed worry that AI could be used to create fake sexual images or videos.
UNICEF called for “robust guardrails” for AI chatbots and proactive measures by digital companies to prevent the circulation of deepfakes.
The agency urged all countries to expand legal definitions of child sexual abuse material to include AI-generated imagery.
The study was conducted in Armenia, Brazil, Colombia, the Dominican Republic, Mexico, Montenegro, Morocco, North Macedonia, Pakistan, Serbia, and Tunisia.