A startling report has uncovered that in 2025, paedophiles used artificial intelligence to create over 3,000 child abuse videos. This troubling revelation comes from the Internet Watch Foundation (IWF), which reports that last year was the worst on record for AI-generated child sexual abuse material.
The IWF’s analysis revealed a staggering 26,362 percent surge in the production of photo-realistic AI videos depicting child sexual abuse. In 2025 alone, the foundation identified 3,440 such videos, a sharp increase from the mere 13 discovered in 2024.
Worryingly, 65 per cent of these videos were classified as the most extreme class of abuse, Category A, which can involve penetration, bestiality, and sexual torture.
Kerry Smith, Chief Executive of the IWF, says: ‘Our analysts work tirelessly to get this imagery removed to give victims some hope.
‘But now AI has moved on to such an extent, criminals essentially can have their own child sexual abuse machines to make whatever they want to see.’
Based on the findings, the IWF is calling for immediate action to ban the technology.
Paedophiles used artificial intelligence (AI) to generate a record 3,440 child abuse videos in 2025, a shocking report has revealed (stock image)
The ‘frightening’ increase in AI-generated child abuse material comes as the IWF reports its worst-ever year for online abuse material.
In 2025, IWF analysts took action on 312,030 reports confirmed to contain child sexual abuse material.
This record-breaking high marks a seven per cent increase on the 291,730 confirmed reports in 2024.
A large part of that increase has been driven by the explosive growth in AI-generated content.
AI tools to create sexual abuse material are not new, but the last year saw criminals improve the technology to make more extreme content at a significantly faster rate.
The IWF now warns that this material can be made at scale by criminals with minimal technological knowledge.
Ms Smith explained: ‘The frightening rise in extreme Category A videos of AI-generated child sexual abuse shows the kind of things criminals want. And it is dangerous.
‘Easy availability of this material will only embolden those with a sexual interest in children, fuel its commercialisation, and further endanger children both on and offline.’
The Internet Watch Foundation warns that AI has given criminals access to ‘their own child sexual abuse machines’ (stock image)
In 2024, Hugh Nelson, then 27, was sentenced to 18 years’ imprisonment for using AI to turn photographs of real children, sent to him by paedophiles online, into sexual abuse images
Nor does the fact that this material is AI-generated mean that no children were harmed in its production.
AI child sex abuse material often uses the likeness of real children known to the abuser as a basis for the videos.
In 2024, Hugh Nelson, then 27, was sentenced to 18 years’ imprisonment for using AI to alter photographs of real children to create sexual abuse images.
The court found that Mr Nelson’s paying customers, who provided the photographs, were predominantly the fathers, uncles, family friends, or neighbours of the victims.
Additionally, the likenesses of identifiable victims can either be depicted in the abuse material or used to ‘train’ the image-generating AI.
Jamie Hurworth, Online Safety Act expert and dispute resolution lawyer for the firm Payne Hicks Beach, says: ‘The use of generative AI to create child sexual abuse material should not be a legal grey area. It is sexual exploitation, regardless of whether the images are “synthetic”.
‘What this news shows is the scale at which AI can turbo-charge harm if effective safeguards are not built in and enforced.’
This news comes as Elon Musk bowed to pressure and moved to prevent his Grok AI from creating sexualised pictures of real individuals.
This comes after X was forced to restrict image generation for Elon Musk’s Grok AI, as the bot repeatedly produced sexualised images of children or adults altered to look like children
The social media site X, formerly Twitter, had been flooded with non-consensual sexualised images of women, created by users manipulating the AI image generator.
These images included sexualised images of children and adults digitally manipulated to look like children.
Ashley St Clair, mother of one of Elon Musk’s sons, is now suing X over AI-generated images which, according to court filings, include one of her as a 14-year-old stripped down to a string bikini.
Elon Musk had previously defended X, saying that critics ‘just want to suppress free speech’, and posted two AI-generated pictures of Prime Minister Sir Keir Starmer in a bikini.
On Wednesday, X announced that it had ‘implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing’.
While Ofcom said that this was a ‘welcome development’, the watchdog added that its investigation into whether X broke UK laws ‘remains ongoing’.
However, the standalone Grok app, Grok Imagine, is reportedly still capable of producing nude images which can be posted to X.
As the IWF points out, under current legislation it is extremely difficult for authorities to test whether an AI tool can be misused, since any imagery inadvertently created in the process would itself constitute an offence.
Ms St Clair alleges that the Grok AI had been used to create child sexual abuse material (CSAM) depicting her as a four-year-old girl
Under new rules proposed in November, designated bodies like the IWF and AI developers will be given powers to scrutinise AI models to ensure they cannot be used to create nude or sexual imagery of children.
Additionally, in December, the government announced plans to outlaw AI ‘nudify’ apps that digitally remove clothes from photographs.
Technology Secretary Liz Kendall said: ‘It is utterly abhorrent that AI is being used to target women and girls in this way.
‘We will not tolerate this technology being weaponised to cause harm, which is why I have accelerated our action to bring into force a ban on the creation of non-consensual AI-generated intimate images.’