Introducing a duty of care for social media

Written by Maeve Walsh, Carnegie UK Trust Associate

As the Government prepares to publish its Internet Safety Strategy White Paper early next year, there is no shortage of advice in development to help Ministers decide on their way forward. Parliamentary inquiries into the impact of technology on our lives, being undertaken by the Science and Technology Committee, the DCMS Select Committee and the Lords Communications Committee, are due to report in the coming months. The new Health Secretary has tasked the Chief Medical Officer with producing guidelines on screen time for children, to add to her ongoing review of the evidence on the impact of social media on young people’s mental health. And a number of recent reports from organisations such as doteveryone, the Tony Blair Institute for Global Change and WebRoots Democracy have put forward different regulatory proposals to rein in the power of tech giants, reduce online harms and deliver greater protection for individuals.

There is no doubt that some form of regulation will form part of the Government’s proposals. But the challenge for policymakers and legislators in this fast-moving area is clear: as more and more examples of threats and harms to individuals from social media use emerge, we are still a long way from amassing the type of robust, authoritative evidence of causation traditionally required as a basis for regulatory action.

But we can’t ignore some of the correlations. For example, the explosion of social media use amongst young people has coincided with evidence, over the same period, of a rise in self-harm and suicidal behaviour:

  • Between 2011 and 2017, recorded rates of self-harm among girls aged 13 to 16 rose by 68%; and
  • A 2017 US study found that suicide rates for teens rose steadily between 2010 and 2015, after nearly two decades of decline.

So, is this sufficient evidence to act? While it is all too easy to treat the challenges of technology and social media use as new, uncharted territory, there are historic parallels, and the work of William Perrin and Professor Lorna Woods for Carnegie UK Trust on Harm Reduction in Social Media draws on them.

After the many public health and science controversies of the 1990s, the UK government’s Interdepartmental Liaison Group on Risk Assessment (ILGRA) published its thinking on the use of the “precautionary principle” for UK decision-makers grappling with complex, novel issues.

‘The precautionary principle should be applied when, on the basis of the best scientific advice available in the time-frame for decision-making: there is good reason to believe that harmful effects may occur to human, animal or plant health, or to the environment; and the level of scientific uncertainty about the consequences or likelihoods is such that risk cannot be assessed with sufficient confidence to inform decision-making.’

The ILGRA document advises regulators on how to act when early evidence of harm to the public is apparent but before unequivocal scientific advice has had time to emerge, with a particular focus on novel harms. The ILGRA’s work is still current: hosted by the Health and Safety Executive (HSE), it underpins risk-based regulation in new and innovative areas. The HSE also upholds a much more established regulatory approach: the duty of care principle, set out in the Health and Safety at Work Act 1974, which holds owners of public spaces in the physical realm responsible for the health and safety of those who use those spaces – whether employees, customers or visitors.

If you apply the same approach to social media platforms – treating them as a form of public space – then the people who use them should be protected from reasonably foreseeable harm, just as they would expect to be in any physical public place, such as an office, bar or theme park. A person (including a company) under a duty of care must take care in relation to a particular activity as it affects particular people or things. If that person does not take care and someone comes to harm as a result, there are legal consequences – primarily through a regulatory scheme, but also with the option of personal legal redress.

Applying this approach to social media would work as follows. New legislation would set out the duty of care and identify the key harms Parliament wants the regulator to focus on: for example, the ‘stirring up of hatred’ offences, national security, harms to children, emotional harm, harms to the judicial and electoral processes, and economic harms. A regulator (the proposal suggests Ofcom is best placed to take this on) would run a harm reduction cycle involving civil society as well as companies at each consultative step. Companies would be required to measure and survey harm, produce plans to address those harms for public consultation and agreement with the regulator, and then implement the plans. If the cycle does not reduce harms, or the companies do not co-operate, sanctions could be deployed.

Simple, broadly based and largely future-proof, a “duty of care” for social media would allow the prevention of harm to be expressed in terms of outcomes rather than specifics of process. It would support a preventative approach that reduces adverse impacts on users, rather than a reactive one that demands a high evidential bar and focuses on compensation and redress.

More detail can be found in William Perrin and Lorna Woods’s series of blogs and in their evidence submitted to a number of the ongoing Parliamentary inquiries.

