UK Technology Firms and Child Safety Agencies to Test AI's Capability to Generate Exploitation Images

Technology companies and child safety organizations will be granted authority to assess whether AI systems can produce child exploitation material under recently introduced UK laws.

Significant Rise in AI-Generated Harmful Content

The declaration coincided with findings from a protection monitoring body showing that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the changes, authorities will allow approved AI companies and child safety groups to inspect AI models – the foundational systems behind chatbots and image-generation tools – and ensure they have adequate safeguards to prevent them from producing images of child exploitation.

"This is ultimately about preventing abuse before it occurs," declared the minister for AI and online safety, noting: "Experts, under strict protocols, can now detect the danger in AI models promptly."

Addressing Legal Obstacles

The amendments have been introduced because producing and possessing CSAM is illegal, meaning that AI developers and other parties could not create such content as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM appeared online before acting on it.

This law is designed to prevent that issue by enabling experts to halt the creation of such images at source.

Legislative Framework

The amendments are being introduced by the government as revisions to the crime and policing bill, which is also implementing a prohibition on owning, creating or sharing AI models developed to generate exploitative content.

Real-World Consequences

This week, the official toured the London headquarters of a children's helpline and listened to a mock-up call to counsellors involving a report of AI-based abuse. The interaction portrayed a teenager requesting help after being blackmailed with a sexualised AI-generated image of themselves.

"When I learn about young people facing blackmail online, it is a source of intense frustration for me and of justified anger amongst families," he stated.

Alarming Data

A leading internet monitoring foundation reported that instances of AI-generated abuse material – such as online pages that may contain multiple images – had more than doubled so far this year.

Cases of the most severe material – the gravest form of abuse – increased from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
  • Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "constitute a vital step to guarantee AI products are secure before they are released," stated the head of the online safety organization.

"AI tools have made it possible for victims to be victimised all over again with just a few clicks, giving criminals the capability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further exploits victims' suffering, and makes children, especially girls, more vulnerable both online and offline."

Counseling Interaction Data

Childline also released details of counselling sessions where AI has been mentioned. AI-related risks discussed in the sessions include:

  • Using AI to rate weight, physique and appearance
  • AI assistants discouraging young people from consulting trusted guardians about abuse
  • Being bullied online with AI-generated material
  • Digital blackmail using AI-faked pictures

Between April and September this year, Childline conducted 367 support interactions where AI, conversational AI and related topics were mentioned, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.

Robert Williams

A seasoned financial analyst and writer passionate about empowering others through clear, actionable advice on money and life.