UK Tech Firms and Child Protection Agencies to Test AI's Ability to Generate Exploitation Content

Technology companies and child protection organizations will be granted authority to assess whether artificial intelligence systems can produce child abuse images under recently introduced British laws.

Significant Increase in AI-Generated Illegal Content

The announcement coincided with new figures from a child protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the amendments, the government will allow approved AI companies and child protection groups to examine AI models – the foundational systems behind chatbots and image-generation tools – and check they have sufficient safeguards to prevent them from producing depictions of child sexual abuse.

"Ultimately about stopping exploitation before it happens," declared the minister for AI and online safety, adding: "Specialists, under strict protocols, can now detect the risk in AI systems early."

Tackling Regulatory Challenges

The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties cannot generate such content as part of a testing process. Until now, officials could act only after AI-generated CSAM had been published online.

The legislation aims to prevent that problem by making it possible to stop the production of such images at source.

Legal Structure

The government is introducing the amendments to the crime and policing bill, which also establishes a prohibition on possessing, creating or distributing AI models designed to generate exploitative content.

Real-World Consequences

This week, the minister visited the London base of a children's helpline and listened to a mock-up of a call to counsellors featuring an account of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.

"When I learn about young people experiencing extortion online, it is a cause of intense anger in me and rightful concern amongst families," he said.

Concerning Statistics

A leading internet monitoring organization said that reports of AI-generated exploitation material – each of which can refer to a webpage containing numerous files – had risen significantly so far this year.

Instances of category A material – the most serious form of abuse – rose from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, making up 94% of illegal AI depictions in 2025
  • Depictions of infants to two-year-olds increased from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "constitute a crucial step to ensure AI products are secure before they are released," commented the chief executive of the online safety foundation.

"Artificial intelligence systems have made it so victims can be targeted repeatedly with just a simple actions, providing offenders the capability to make potentially endless quantities of advanced, lifelike exploitative content," she added. "Content which further exploits victims' suffering, and renders young people, especially girls, less safe both online and offline."

Counseling Session Information

The children's helpline also published details of counselling interactions where AI has been mentioned. AI-related risks mentioned in the sessions include:

  • Using AI to evaluate body size and appearance
  • AI assistants dissuading children from consulting trusted adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-faked images

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and associated terms were mentioned, significantly more than in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy applications.

Melissa Fuller