Social media sites could be banned for use by children if they breach new rules proposed to enhance online safety.
They will be required to prevent children from seeing the type of content that Dundee schoolgirl Sophie Parkinson saw before her death 10 years ago.
Her mother Ruth Moss told BBC Breakfast she hopes the restrictions published today by regulator Ofcom are not a “box-ticking exercise”.
High School of Dundee pupil Sophie, who lived in Liff, Angus, was only 13 when she took her own life in 2014.
Ruth said they had parental controls at home but Sophie would use the internet on the bus home.
She told the BBC: “She actually managed to investigate and research how she was going to die by suicide and it was blatant.
“No 13-year-old should see that sort of material.”
So how could the new rules protect children and young people from seeing the type of content that Sophie did?
What are the online safety rules proposed by Ofcom?
The UK’s online safety regulator Ofcom has published a series of rules to enforce the Online Safety Act, which was approved last October.
Once adopted, these must be followed by social media apps, search engines and other online services such as Facebook, TikTok, WhatsApp and Instagram.
Operators will have to mitigate risks to children, including from their functionalities and algorithms.
This should prevent children from seeing the most harmful content relating to suicide, self-harm, eating disorders and pornography.
It should also minimise children’s exposure to other serious harms including violent, hateful or abusive material, bullying content and promotion of dangerous challenges.
Algorithms recommend personalised content to users based on content they have previously engaged with. Dangerous online challenges have included the blackout challenge, in which people held their breath until they passed out.
The key safety measures from more than 40 proposed are:
- Robust age checks – if services don’t ban harmful content, they should introduce highly effective age checks. Checks could include photo-ID matching or facial age estimation.
- Safer algorithms – these must be designed to filter out the most harmful content from children’s feeds. Other harmful material should be downranked. Children must also be able to give feedback if there is content they don’t want to see.
- Effective moderation – content moderation systems must be used to take quick action on harmful content. Large search services should use a ‘safe search’ setting for children.
When will the rules come into force – and what if they are broken?
Ofcom is consulting on these draft rules until July 17.
Its finalised rules are likely to be published in spring next year and come into force in the second half of 2025.
League tables will be published to allow the public to see which companies are making the required changes and which are not.
Any companies that break the rules could have their minimum user age raised to 18.
Breaching the Online Safety Act could also result in fines for tech companies of up to 10% of global revenue or £18m, whichever is greater. Bosses could be imprisoned.