Discussing data science
November 2017
In 1843, Ada Lovelace wrote of ‘a new, a vast, and a powerful language…developed for the future use of analysis, in which to wield its truths…for the purposes of mankind.’ In recent weeks, however, headlines about the behaviour of Cambridge Analytica have led many people to question whether data is a force for good. The data firm harvested the personal data of more than 50 million Facebook users, and allegedly used the information to build systems that targeted voters with personalised political advertisements. Users deleting Facebook found that its apps had been collecting details of every phone call and text message they made, sent or received. With details of this data collection buried in lengthy terms and conditions that most of us never read, it’s not surprising that many people feel their privacy has been violated.
In the modern world, we create data almost every hour of the day. Your mobile knows when you pick it up. Mapping apps know where you have been, where you’re going, and when you travel. Social networks know your likes and dislikes, and who your friends are. Supermarkets know what you usually buy through your loyalty cards. Online stores know what you’ve browsed, and will later remind you of the things you left in your basket.
This intimate knowledge of our lives can give companies serious power. They can target advertisements at us – in some cases, potentially even influencing our democratic choices. And they may be able to discriminate against us – identifying people who are less price-sensitive, or who pose a higher risk, and charging those groups more.
Greater access to data does carry risks. But these aren’t inevitable. With the right regulation and consumer protection, we could build an economy where data is a force for good.
Most of us experience optimism bias – we understand that bad things happen in life, but we usually think we are less likely to experience them than other people. This means we don’t tend to plan for difficult eventualities, like relationship breakdown or redundancy. And when these difficult things happen, it can be very hard to ask for help – particularly when you’re experiencing problems with money, which can affect your self-esteem and sense of self-worth. If you’re experiencing a mental health problem, which might leave you feeling hopeless or struggling to find motivation, reaching out is even more difficult.
Data can help to solve both of these problems – allowing people to find help more quickly, before problems get worse. A bank, for example, could spot the early warning signs of financial difficulty in a customer’s transaction data and offer support proactively, rather than waiting for them to ask.
But this can only happen if the public are confident about the way that organisations use their data.
Data protection regulations are about to go through their first major update in a generation when the General Data Protection Regulation (GDPR) comes into force in late May. It will give people the right to ask for data about them to be deleted, require firms to be more transparent about what they are doing with data, and introduce stricter penalties for misuse of data.
Hopefully this will reassure people that they do have some fundamental rights over what happens to information about them and their lives. But the pace of change is such that, really, we need to be having a broader conversation. That’s why I was pleased to see the Nuffield Foundation announce the creation of the Ada Lovelace Institute to examine the ethical and social questions that arise from the use of personal data, algorithms and artificial intelligence. Together with the establishment of a government Office for AI, I hope this will propel conversations about what it’s fair for firms to do with data, and how we can use it for good, to the forefront of policymakers’ minds. With the amount of data out there growing by the day, proper policy can’t come soon enough.
This article was originally published on Money and Mental Health.