Source: Department for Science, Innovation and Technology. Published on this website on Wednesday 12 November 2025 by Jill Powell.
New legislation sees government work with AI industry and child protection organisations to ensure AI models cannot be misused to create synthetic child sexual abuse images.
Children will be better protected from becoming victims of horrific indecent deepfakes as the government introduces new laws to ensure Artificial Intelligence (AI) cannot be exploited to generate child sexual abuse material.
Data from the Internet Watch Foundation, released on Wednesday 12 November, shows reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
There has also been a disturbing rise in depictions of infants, with images of 0–2-year-olds surging from 5 in 2024 to 92 in 2025.
Under stringent new legislation, designated bodies, such as AI developers and child protection organisations like the Internet Watch Foundation (IWF), will be empowered to scrutinise AI models and ensure safeguards are in place to prevent them from generating or proliferating child sexual abuse material, including indecent images and videos of children.
Currently, criminal liability for creating and possessing this material means developers cannot carry out safety testing on AI models, and images can only be removed after they have been created and shared online. This measure, one of the first of its kind in the world, ensures AI systems’ safeguards can be robustly tested from the start, to limit the production of such material in the first place.
The laws will also enable organisations to check that models have protections against extreme pornography and non-consensual intimate images.
While possessing and generating child sexual abuse material, whether real or synthetically produced by AI, is already illegal under UK law, improving AI image and video capabilities present a growing challenge.
It is known that offenders who seek to create this heinous material often do so using images of real children - both those known to them and those found online - and attempt to circumvent safeguards designed to prevent this.
This measure aims to make such actions more difficult by empowering companies to ensure their safeguards are effective and to develop innovative, robust methods to prevent model misuse.
It comes as new Internet Watch Foundation data also shows the severity of the material has intensified over the past year. Category A content - images involving penetrative sexual activity, images involving sexual activity with an animal, or sadism - rose from 2,621 to 3,086 items, now accounting for 56% of all illegal material compared to 41% last year.
Girls have been overwhelmingly targeted, making up 94% of illegal AI images in 2025.
To ensure testing work is carried out safely and securely, the government will also bring together a group of experts in AI and child safety.
The group will help design the safeguards needed to protect sensitive data, prevent any risk of illegal content being leaked, and support the wellbeing of researchers involved.
These changes, which will be tabled today (Wednesday 12 November) as an amendment to the Crime and Policing Bill, mark a major step forward in safeguarding children in the digital age.
They reflect the government’s commitment to working hand-in-hand with AI developers, tech platforms, and child protection organisations to build a safer online world for children.
We all want the UK to be the safest place in the world to be online, particularly for children, including when using AI models. This measure aims to help us achieve that goal by making the AI models used by the British public safer and more robust in preventing offenders from misusing this exciting technology for criminal activity.
This proactive approach not only protects children from exploitation and re-victimisation but also reinforces public trust in AI innovation - proving that technological progress and child safety can go hand in hand.