Social media algorithms exposing children to violent pornographic content, report shows
A report by the Children’s Commissioner reveals that social media algorithms are pushing unsolicited, violent, pornographic content to children who use these platforms
Social media algorithms are pushing unsolicited pornographic content into children’s feeds, according to a report by the Children’s Commissioner.
The data was collected prior to the implementation of the Online Safety Act, but provides a snapshot of the types of harmful content being shown to and accessed by children online, and how that content affects them.
According to the report, 70% of respondents, aged between 16 and 21, had seen pornography, with the average respondent reporting first seeing this type of content at the age of 13, and more than a quarter having seen it by the age of 11.
Among respondents exposed to pornographic content online, eight of the top 10 sources of this content were social media or social networking sites.
According to the report, X (formerly known as Twitter) was the platform where children most commonly encountered pornography, cited by 45% of respondents, meaning children were more likely to find pornography there than on dedicated pornographic websites.
Other social media companies popular among children also appear in the survey with what the report describes as “concerning frequency”. These include: Snapchat (29%), Instagram (23%), TikTok (22%) and YouTube (15%).
Strikingly, 59% reported seeing pornography online by accident, up from 38% in 2023. Mark Jones, a partner at law firm Payne Hicks Beach, said that “children are viewing harmful content due to algorithms used by platforms, rather than actively searching it out themselves”.
Harmful content
Jones, who is part of the firm’s dispute resolution department and represents both individuals and corporations, added: “Under the Online Safety Act and the child safety duties, platforms are required to stop their algorithms from recommending harmful content. This, coupled with age assurance measures, aims to protect children in the online world. The algorithms should filter out harmful content from reaching children in the first place.”
The report actively supports the introduction of Ofcom’s new age verification measures and the implementation of the Children’s Code, which requires social media websites to make changes to prevent children from seeing this type of harmful content.
“The Children’s Code came into force from 25 July 2025,” said Jones. “It will be interesting to see what changes, if any, are seen in this area. In particular, whether platforms are effectively moderating content and no longer using toxic algorithms to filter out harmful content being accessed by children.”
Additionally, the report emphasises that the majority of pornographic content seen by respondents depicted acts that are illegal under existing pornography laws. For example, 58% of respondents had seen porn depicting strangulation when they were under the age of 18. Furthermore, 44% reported seeing a depiction of rape.
The report emphasises that this has a detrimental effect on children’s interactions with one another, affecting their expectations around sex and body image.
A spokesperson for the Children’s Commissioner told Computer Weekly that the link between exposure to pornography and harm to children’s behaviour was very significant, based on direct self-reporting from these children.
“Children have told the Children’s Commissioner they expect to be experiencing violence in a relationship, or they expect their first interactions of a sexual nature to be like what they’re seeing in pornography, because that’s what they’re exposed to,” they said.
Depiction of women
Particularly concerning is the depiction of women, who are more commonly shown on the receiving end of sexually aggressive acts than men, which the report finds fosters violent perceptions of sex that target women.
The spokesperson said the commissioner’s surveys and research found that, particularly for girls who had seen violent pornography, their own understanding of consent became clouded.
“Girls who have seen pornography were far more likely to agree with the statement that girls who say ‘no’ can be persuaded to have sex,” they said. “So, they might say ‘no’ to start with, but then are now expecting to be persuaded otherwise. The idea of consent that has been enshrined in our education system through sex education, and relationships education, over the last 10 to 15 years seems to be on rocky ground.”
While social media is often the first point of contact with this content for many of these children, the Children’s Commissioner reiterated that algorithms are not inherently harmful; rather, tech companies are not optimising their platforms to keep this content away from children.
“Tech companies know who their young users are,” said the spokesperson. “They do have the ability to recognise and monitor user activity. There must be a greater focus and less ambiguity about who you direct that algorithmic content to if it’s a young user. It should simply either be stopped before it even gets to their feed, or there has to be a much more stringent way of keeping them off the site, and we are yet to see that with sites like X.”
The report recommends that online pornography be made to meet the same content requirements as offline pornography, so that the depiction of non-fatal strangulation is outlawed.
It also calls for the government to explore options that prevent children from using virtual private networks (VPNs) to bypass the Online Safety Act’s regulations, and further funding for schools to implement the new Relationships, Health and Sex Education (RHSE) curriculum, including a recruitment drive for specialist RHSE teachers. “This has to be a benchmark against the success of the Online Safety Act. We will repeat this survey again next year to see if there is any significant change in what children are able to access,” the report added.
Read more about the Online Safety Act

- In this essential guide, Computer Weekly looks at the UK’s implementation of the Online Safety Act, including controversies around age verification measures and the threat it poses to end-to-end encryption.
- Adele is a member of the Families and Survivors to Prevent Online Suicide Harms campaign, a network that brings together survivors and families bereaved by online harm-related suicides. They are calling for changes to the enforcement of the Online Safety Act.
The communications regulator, Ofcom, has not directly responded to the report’s findings, but has previously stated: “Tech firms must introduce age checks to prevent children from accessing porn, self-harm, suicide and eating disorder content.” It has also said it would “expect to launch any investigations into individual services” that fail to comply.
There have been several calls to implement stricter regulation of social media algorithms, which have reportedly fuelled misinformation and other harmful content.
The Commons Science, Innovation and Technology Committee has previously attributed the spread of misinformation to algorithms built around advertising and engagement-based business models, which generate revenue without implementing tools to deprioritise harmful content.
Recommendation algorithms are typically built on machine learning models and can develop biases, prioritising shocking content because it generates clicks. The same technology can be repurposed to reinforce positive social outcomes and keep harmful content from being shown to users.
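As a rough illustration of the distinction the committee and the report draw, the sketch below shows the difference between an engagement-only recommender and one that filters classified content before it reaches a child’s feed. This is a hypothetical, simplified example, not any platform’s actual system; the post fields, labels and function names are invented for illustration. The point is that the ranking signal stays the same, and only a safety check is added upstream.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float  # engagement score from some trained model (assumed)
    safety_label: str        # e.g. "safe" or "adult", from a content classifier (assumed)

def rank_engagement_only(posts):
    """Pure engagement ranking: whatever is predicted to get clicks rises to
    the top, regardless of whether it is appropriate for the viewer."""
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

def rank_with_safety_filter(posts, viewer_is_minor: bool):
    """Same ranking signal, but flagged content is filtered out before
    ranking when the viewer is known (or assumed) to be a child."""
    if viewer_is_minor:
        posts = [p for p in posts if p.safety_label == "safe"]
    return sorted(posts, key=lambda p: p.predicted_clicks, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("a", 0.91, "adult"),
        Post("b", 0.40, "safe"),
        Post("c", 0.75, "safe"),
    ]
    # Engagement-only ranking surfaces the "adult" post first...
    print([p.post_id for p in rank_engagement_only(feed)])           # ['a', 'c', 'b']
    # ...while filtering before ranking keeps it out of a child's feed.
    print([p.post_id for p in rank_with_safety_filter(feed, True)])  # ['c', 'b']
```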