Despite new laws intended to bolster online safety, teenagers are still being shown harmful content within minutes of setting up social media profiles, raising questions about how effective the regulations are in practice.
An investigation found that 15-year-olds who created accounts on Instagram or TikTok were quickly exposed to disturbing material, including racist, misogynistic and violent content, without searching for it or consenting to see it.
In one case, a teenager was shown a video depicting a man drugging and abducting a woman within eight minutes of joining TikTok. Research by The Cybersmile Foundation, an online safety charity, found that adults could be exposed to similar content in as little as 16 seconds.
Over three days, up to 33 per cent of posts shown to adults and 20 per cent of those shown to teenagers contained material deemed harmful to viewers’ mental or physical wellbeing, often promoting dangerous behaviour.
The charity said social media users, including children, were being ‘force fed’ harmful videos, despite the introduction of the Online Safety Act. The act, which came into force in July 2025, gave social platforms a legal duty to protect children from online harm, including preventing them from accessing some damaging content.
Campaigners are calling for users to be given ‘complete control’ over the type of content they see, including the option to opt out of any topic.
Scott Freeman, chief executive and founder of The Cybersmile Foundation, said: ‘Uncontrollable exposure to harmful content shouldn’t be the price that users are required to pay to use social media.
‘We have seen improvements in user safety tools in recent years but most platforms still only allow you to say you’d like to see “more” or “less” of certain content types. There’s no option to say: “I don’t want to see this, turn it off”.’
For the study, led by The Cybersmile Foundation, eight adult participants used smartphones which had been reset to factory settings (to remove any identifying details) to set up new accounts on TikTok or Instagram. Four said they were adults over 25, while two posed as 15-year-old girls and two as 15-year-old boys.
Each scrolled on the main video feed of their account – the ‘Reels’ tab on Instagram or ‘For You’ page on TikTok – for 45 minutes per day for three days.
Participants documented how long it took for them to be exposed to harmful content, what harmful themes they saw and how much harmful content appeared each day.
They did not press ‘like’ or comment on any content, but if they were shown harmful content, they watched it two to three times.
This was to test whether the algorithms prioritised users’ wellbeing – by protecting them from harmful content – or their engagement, by serving up similar videos.
Freeman said it was often a natural instinct for social media users to pause and watch harmful content out of shock or horror, but when algorithms viewed this as ‘engagement’ they could show more of this type of content.
Participants in the study saw videos featuring hateful speech towards different groups – including promoting antisemitism, racism, misogyny and mocking people with disabilities – as well as content depicting and encouraging extreme violence, dangerous behaviour and suicidal ideation.
The study found adult accounts were exposed to more harmful content overall, with this material comprising up to 38 per cent of all content they saw over the three days on the platforms.
Teenagers’ accounts were exposed to less harmful content overall, at up to 18 per cent. It also tended to be less extreme.
After three days, 90 per cent of users in the study had been served at least one racist video, 60 per cent had seen misogynistic content and 60 per cent saw violent content.
The research was carried out in September 2025, after the Online Safety Act came into force.
Now, The Cybersmile Foundation is calling for social media companies to introduce customisable content filters and parental controls, which Freeman says will empower people ‘to protect their wellbeing without compromising free speech’.
‘This is not about demonising social media platforms but offering a solution which enables people to use social media safely and empowers them to have control over what they consume,’ he added.
Dr Jo Hickman Dunne, research fellow in adolescent mental health at the University of Manchester, who independently reviewed the study, said: ‘Young people tell us that content they do not want to see makes it into their social media feed… Social media systems have been designed to prioritise engagement over wellbeing.
‘We have the capacity to change this, for the wellbeing of young people and all [social media] users.’
Meta said the study was not robust enough to accurately reflect user experience.
A spokesman said: ‘Teen Accounts have built-in, default protections and content settings inspired by PG-13 film ratings. Hundreds of millions of teens worldwide now use Teen Accounts and, since launch, they’ve seen less sensitive content, experienced less unwanted contact, and spent less time on Instagram overnight.’
A TikTok spokesman said: ‘On TikTok, teen accounts have more than 50 preset safety features and settings so that young people can safely discover what they love and learn new things.
‘Of the content we remove for breaking our rules, 99 per cent is found before it is reported to us and, with a sample size of four teens, this research in no way reflects the typical teen experience on our platform.’