British Technology Companies and Child Protection Officials to Examine AI's Capability to Generate Exploitation Content

Technology companies and child safety agencies will receive authority to evaluate whether artificial intelligence systems can produce child abuse images under recently introduced UK laws.

Substantial Rise in AI-Generated Harmful Material

The announcement coincided with findings from a protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the changes, the government will allow designated AI developers and child protection organizations to inspect AI systems – the foundational technology for chatbots and visual AI tools – to ensure they have sufficient safeguards to prevent them from creating depictions of child exploitation.

The minister for AI and online safety said the measures were "fundamentally about stopping exploitation before it occurs", adding: "Experts, under strict protocols, can now identify the risk in AI systems early."

Tackling Legal Obstacles

The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties cannot generate such content as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was published online before addressing it.

This law aims to prevent that problem by enabling the production of such material to be stopped at source.

Legislative Structure

The changes are being introduced by the government as revisions to the criminal justice legislation, which is also establishing a prohibition on owning, producing or sharing AI models designed to generate child sexual abuse material.

Practical Impact

This week, the official visited Childline's London base and listened to a simulated call to counsellors involving a report of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I hear about children experiencing blackmail online, it makes me extremely angry, and parents are justifiably concerned," he stated.

Alarming Data

A leading internet monitoring organization reported that cases of AI-generated abuse material – such as webpages that may include numerous images – had significantly increased so far this year.

Cases of category A material – the most serious form of exploitation – rose from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, making up 94% of illegal AI images in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "constitute a crucial step to ensure AI tools are safe before they are released," stated the head of the internet monitoring foundation.

"AI tools have made it possible for survivors to be targeted repeatedly with just a few simple actions, giving offenders the capability to produce potentially unlimited amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Material which further commodifies victims' suffering, and makes young people, especially female children, less safe both online and offline."

Counseling Session Information

Childline has also released details of counselling sessions in which AI was mentioned. AI-related risks raised in the conversations include:

  • Employing AI to evaluate weight, body and looks
  • Chatbots discouraging young people from consulting safe adults about harm
  • Being bullied online with AI-generated content
  • Online blackmail using AI-manipulated images

Between April and September this year, Childline conducted 367 counselling sessions in which AI, conversational AI and related terms were mentioned, significantly more than in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Elizabeth Chaney