Social media platforms have come under increasing scrutiny as instances of inappropriate content have emerged.
And with kids spending more time than ever online during the pandemic, parents and carers are rightly worried about how to keep them safe when using social media.
But what measures do these companies put in place to stop such material getting onto the platforms? And could they be doing more to keep children and young people safe when they are online?
We contacted Snapchat and TikTok to get their response to these concerns.
‘An alternative to traditional social media’
A Snapchat representative detailed the safety precautions the platform had set up to ensure inappropriate content does not find its way onto the app.
They also stressed that the app is different to other social media platforms in that there is “no way” for rogue accounts to broadcast content to all users on the platform.
The rep said: “We believe we have a responsibility to provide our users with a safe, positive and personal experience on our platform.
“Snapchat is designed as an alternative to traditional social media—a place where close friends can connect, strangers cannot broadcast to everyone on the platform, and popularity and content is not measured by virality metrics.
“We offer no way for unvetted accounts to broadcast to our entire user base, which means Snapchat is not a platform where anyone can distribute anything to anyone.
“Our content platform, Discover, is closed, and individual Snapchat users cannot share content to a wide public audience. Like a television network, only media brands and content creators who we have chosen to work with have the ability to distribute content to large groups of Snapchatters.
“All new features go through an intense privacy review process – where our privacy engineers and privacy lawyers vet all features that touch a user before they are released. We have always used this ‘privacy-by-design’ approach and won’t release a feature that doesn’t pass this vet.”
Snapchat, which launched in 2011, allows users to communicate in private groups, a feature the platform insists keeps its users safe.
The rep added: “In private (one-to-one or small group) communication it is difficult for one single account or post to gain traction or ‘go viral’ because by default Snapchat accounts are set to friends only. There are no likes, shares or comments on Snapchat and group chats are limited to 31 people.
“User content on Snapchat is designed to delete by default, meaning that the majority of snaps and stories will automatically be deleted once opened by the intended recipient(s) or within 24 hours of being posted.”
The spokeswoman also pointed to rules and regulations that users of the app have to adhere to and what action they themselves can take if they spot inappropriate content.
She said: “We have clear community guidelines and terms of service that tell Snapchatters what type of content is acceptable to post on Snapchat.
“We encourage anyone who sees any violating content to report it immediately using our in-app reporting tools, so our Trust and Safety team can take action.
“They work around the clock to review abuse reports and take action when they become aware of a violation. In the vast majority of cases, they enforce reported in-app content well within two hours.
“When we are notified that a Snapchatter is violating our rules, we promptly investigate and remove the offending content and, if appropriate, may terminate the account.”
TikTok
The video-sharing platform came under fire recently after a video of a man taking his own life went viral on the site.
But representatives for the company defended the safety measures it has in place to deal with such content, outlining what action is taken when it is flagged up.
A TikTok spokesman said: “Recently, clips of a suicide that had originally been live-streamed on Facebook circulated on other platforms, including TikTok.
“Our systems, together with our moderation teams, detected and removed these clips for violating our policies against content that displays, praises, glorifies, or promotes suicide.
“We banned accounts that repeatedly tried to upload clips, and we appreciate our community members who reported content and warned others against watching, engaging, or sharing such videos on any platform out of respect for the person and their family.”
The spokesman also outlined the steps parents can take to keep their children safe when they are using TikTok, and where they can find more information about those tools.
He added: “We have built several tools to help parents manage their child’s experience on TikTok. This includes controls on what content they can see, and how long they can spend online. Parents can read about these tools in our safety center.
“We also wrote to the leaders of nine other platforms, offering to work together to further protect our users from harmful content.
“If anyone in our community is struggling with thoughts of suicide or concerned about someone who is, we encourage them to seek support, and we provide access to hotlines directly from our app and in our safety center.”