Why do we need the online safety bill?

Internet safety

Written by Andy Robinson, Marketing and Communications Officer, SWGfL

We spoke last week about the ground-breaking milestone of the Online Safety Bill being introduced to Parliament. Across the UK, it made headlines as more and more people became aware of the new measures coming into place to help better protect users online. Now, with the Online Safety Bill such a talked-about topic, we look back on how it came to be and what it is trying to achieve.

 

An internet without online safety law

When the internet was first introduced, there was very little that needed to be regulated. Until 1993, it was primarily used for military purposes as a communication tool. When the ‘World Wide Web’ launched, it was essentially a basic information finder that allowed anyone to search for websites or content. It wasn’t until the mid-to-late 1990s, when search engines and online companies started to develop, that the internet became the life-changing tool we see today.

With more users came more opportunities for innovation. The 2000s saw the rise of many things, including game-changing social media platforms. Users were able to socialise and engage with others online, as well as search for and view content that could range from harmless to harmful. On top of that, with more personal information being shared online, cyber-security fast became an area to address as hacking and computer viruses spread.

As the internet grew, so did online safety. In many instances, the rules around how to keep safe online were still being figured out as new developments appeared. The same can be seen in many other sectors: in the automotive sector, for example, the publication of the book ‘Unsafe at Any Speed’ shook the auto world and paved the way for regulation in automotive safety.

Cyber-security software could protect devices, but it opened up the question of how to protect the people who were using them. With so much freedom available and so many opportunities for harm, the gaps were starting to show.

 

The rise of harmful content

The internet is vast, and the types of content a user is exposed to are unpredictable. Illegal and harmful material was almost commonplace in the online world, with criminal activity such as scamming also taking place on a daily basis. With social media becoming a worldwide phenomenon, users were interacting with each other far more, which opened more people up to harmful encounters such as ‘trolling’, abuse and the sharing of offensive or illegal material.

At the heart of it was the concern that children and young people were also experiencing this type of harm, leaving many in danger of exposure to illegal and harmful content. SWGfL started their online safety journey by ensuring schools could have a safer internet connection, with appropriate filtering and monitoring in place to protect them against material such as terrorist content and child abuse. Pioneering as it was, this was only the beginning.

There was an ongoing focus on what tech firms were doing to protect their users online. It was fast becoming a concern that online companies were not doing enough to prevent and respond to certain types of online harm. Much of the time, change only followed media-worthy incidents, which put pressure on companies to do more. It was very much an adapt-as-you-go mentality; change was needed in order to effectively tackle what was being seen.

Many online platforms began to update and expand their terms of service to highlight their stance against offensive or illegal content. As part of this, many social media platforms included reporting buttons so users could flag an account or a piece of content that violated community standards, while bringing in new measures such as moderation teams to review reports.

Highlighted by the Byron Review and the Bailey Review, parental controls and security settings also became more prevalent, encouraging parents and carers to take a more active role in their children’s online activity. Despite this, it was still very apparent that more work needed to be done. Questions were raised around how harmful or illegal material should be defined, as well as how platforms were keeping children and young people safe with appropriate age verification checks and data protection where necessary.

Our work on the helplines, particularly Report Harmful Content and the Revenge Porn Helpline, showed that online safety was still a growing concern. Reports of harmful content and intimate image abuse grew each year, meaning users were still navigating an unpredictable and often harmful online space.

 

The start of addressing online safety law

In April 2019, the Government introduced the Online Harms White Paper. It set out what the Government saw as potential new measures to introduce the first online safety laws for the UK. These included the prospect of a regulator being appointed to ensure standards were met, as well as tech firms being held to a ‘duty of care’ to protect their users. It also covered tackling harms such as ‘inciting violence and violent content, encouraging suicide, disinformation, cyber bullying and children accessing inappropriate material’. We wrote an in-depth response to this paper.

After a consultation period, the Draft Online Safety Bill was released in May 2021. Ofcom was identified as the regulator for the bill, with the power to fine companies that did not comply, while new measures were brought forward, including additions to protect freedom of expression and to tackle online scams. Indeed, in the Queen’s Speech, the Queen said the UK would ‘lead the way in online safety’.

In 2021, many social media platforms introduced new features and additions to work towards protecting children and young people online. These included wellbeing tools, filtering features and new privacy settings for younger accounts. The Information Commissioner’s Office also began enforcing the Age Appropriate Design Code for tech firms, ensuring that young people’s data is managed correctly and their rights are effectively adhered to. The code was brought forward to support tech firms in making the necessary changes to their platforms.

Following a joint committee report on the Draft Online Safety Bill, more recommendations were put forward. These included more clarity around illegal content, increased legal duties around age verification for sites that host pornography, and making cyberflashing illegal. The review process was coming to an end, and 2022 would be the time for taking it further.

 

The Online Safety Bill

It has taken a long time to get where we are today. In a world built on laws and regulation, online safety is essential, ever-changing and in need of consistent prioritisation. We have seen harm online for many years, and until now, a legal framework ensuring people are protected has been missing. This bill will address many of the areas we have long been looking to tackle and will hopefully provide the clarity and reassurance users need to keep themselves safe online.


Originally posted here
