
Addressing online safety

In his weekly column, MP James Wild ponders a safer internet…

My first phone was a Samsung flip that looked like a Star Trek walkie talkie. I still have it in a drawer. At that stage the mobile internet was in its infancy compared to today where smart phones are ubiquitous. The technology we enjoy now brings tremendous benefits and access to limitless content but it also means there need to be protections in place to keep children safe online.

Many constituents have contacted me concerned at the impact of harmful content online on their children and how to keep them safe. There is much greater awareness of the way apps are designed to keep children online for longer. When I visit schools, teachers are clear that online bullying enabled by mobile devices is causing real harms.

MP for North West Norfolk

That must change. Through the Online Safety Act, tech firms are for the first time legally responsible for content on their sites: they must assess the risk their services pose to children and implement safety measures to mitigate those risks. The regulator, Ofcom, has just set out more detail on how this will work in practice for search engines, social media sites, and apps.

First, providers will have to carry out robust age checks to stop children accessing harmful content. Services will need to know which of their users are children in order to protect them. This could mean children being prevented from accessing an entire site or app, or having their access restricted to parts of it.

Second, tech firms must stop their algorithms recommending harmful content to children, including material about suicide, self-harm, and pornography. We have seen tragic cases where young people have taken their own lives after being exposed to high volumes of unsolicited, dangerous content. Now services will have to filter out the most harmful content.

Third, there needs to be better moderation of content on services. Research found that, over a one-month period, 62% of children aged 13-17 reported encountering online harm, including bullying, with many depressingly considering it an unavoidable part of their lives. Worrying research also suggests that exposure to violent content begins in primary school, and that frequent exposure to self-harm and other dangerous content means it becomes normalised.

To tackle this harm, all user-to-user services must have content moderation systems to ensure swift action is taken against content that is harmful to children. The same will apply to Google and other search engines, which will have to filter out such content for children. Technology, through AI and other automated tools, has an important role to play in this.

Some people favour banning under-16s from having mobile devices at all. Yet such a blunt approach does not differentiate between age groups and would deprive children of the educational benefits and enjoyment technology brings. As the NSPCC has said, blanket bans for teenagers would punish them for the failures of tech firms. Young Minds, a children's mental health charity, has said children do not want access blocked - they want technology companies to build safety into their products.

That is the right approach to pursue. These measures will make companies responsible for the safety of their services, ensure children have an experience appropriate to their age, and give parents more confidence that their children are safe online. If companies fail to act, there are strong enforcement powers.
