Australians have been cut off from three of the world's most visited deepfake "nudification" websites after the platforms were implicated in the creation of AI-generated sexual images of schoolchildren.
UK-based Itai Tech restricted access to several of its platforms after the eSafety Commissioner warned of legal action and a potential $49.5 million fine for failing to comply with Australia's online safety codes and regulations.
The warning noted that the services had been misused in serious incidents in which Australian schoolchildren produced explicit deepfake images of classmates.
The sites let users upload photos of real people, including minors, and generate sexualised depictions of them, including in school uniforms, lingerie, or BDSM scenarios.
According to the eSafety Commissioner, Itai Tech's platforms are among the most visited nudification sites globally, drawing an estimated 100,000 visits a month from Australia.
“We are aware that ‘nudify’ services have been misused with harmful consequences in Australian schools. By restricting access from Australia, we anticipate a real reduction in the number of schoolchildren affected by AI-driven child sexual exploitation,” said eSafety Commissioner Julie Inman Grant.
“We took enforcement action in September because this provider failed to put in safeguards to prevent its services being used to create child sexual exploitation material, and was even marketing features like undressing ‘any girl’, with options for ‘schoolgirl’ image generation and features such as ‘sex mode’.”
Itai Tech also blocked UK users from its website after it was fined £50,000 ($101,000) earlier this month for not having age checks.
Global AI model hosting platform Hugging Face has also changed its terms of service after some models it hosts were misused to create deepfake sexual exploitation material depicting real children and survivors of sexual abuse.
Following a warning from the eSafety Commissioner, the platform has instructed all account holders to take steps to minimise the risk of its hosted models being used to generate child sexual exploitation or pro-terror material.
eSafety said it was targeting consumer-facing AI tools, as well as the underlying models that power them and the platforms that host them.