Time to trade AI power for responsibility
August 2018
Much is said about the harm digital technology does to humanity, with calls for a more ‘ethical’ approach. Yet that approach carries a risk of unintended consequences of its own: restricting innovation’s ability to make the world a better place.
Imagine that a paperclip factory becomes the first place in the world to have access to artificial intelligence smarter than humans, thanks to a clever R&D team.
They create an AI whose sole purpose is to maximise the production of paperclips. It sounds like an innocuous goal, but fast-forward 30 years and this AI has managed to consume all of the Earth’s resources – including the atoms that make up human bodies – in order to create more paperclips.
It’s not all bad, though: there are plenty of paperclips to go around.
This is the paperclip maximiser thought experiment: a provocation about how the development of new technologies may have harmful – or even existential – consequences for humanity. It may sound far-fetched and unrealistic – most thought experiments do – but in this case we have already seen the first example of it brought to life. In fact, you’ve probably interacted with the AI in question.
Social media networks’ AIs are tasked not with maximising paperclips but with absorbing as much of our time and attention as possible, so that attention can be sold to advertisers.
To do this, these AIs have deftly learned that filling our social media feeds with clickbait, fake news, and extreme viewpoints keeps people swiping and tapping far more than in-depth, fact-checked content that challenges our worldview. They have responded by pushing more of the same to hundreds of millions of people, and the real-world consequences for democracy and social cohesion have been plain to see.
This has sparked a mainstream debate – in politics, in public life and behind the scenes at tech companies – about ‘tech ethics’. It’s a hot topic. There are books about it, podcasts about it – DigitalAgenda’s Power & Responsibility summit is built around it.
Given that these conversations were sparked by the unintended negative consequences some tech platforms are having – on our mental health, on democracy, and on equality – it’s no surprise that tech ethics has primarily focused on how harm can be limited or offset.
Yet, ironically, in doing so this debate risks causing unintended harm of its own.
The science-fiction writer and futurist Arthur C. Clarke famously stated, as the third of his Three Laws, that “any sufficiently advanced technology is indistinguishable from magic.”
Nearly 50 years on, in an age of artificial intelligence, virtual reality and constant connectivity, this seems truer than ever.
So we have, effectively, been given magic. What questions should we ask of it? Should we concern ourselves only with how to limit its harm?
Or should we also ask how to use it to make the world better? To cure the sick, lift people out of poverty, or improve people’s mental health?
In other words, how do we ensure the magic of technological advancement makes the world better, rather than worse, at an accelerating rate?
This is the most urgent question for humanity. In times of unprecedented global challenges, harnessing technological progress to create humanitarian progress is an opportunity we can’t afford to miss.
Yet debates about regulations and codes of conduct are taking up most of the airtime and dominating the policy agenda. Tech ethics runs the risk of making us feel we’ve dealt with the issue by putting a few extra regulations or codes of practice in place around the status quo, while we miss a far greater opportunity for change.
In some cases, attempts by tech companies to ‘clean up’ their platforms are shutting out not just bad actors but also good actors – charities and tech-for-good innovators – limiting their ability to reach an audience or innovate on those platforms.
There’s no shortage of innovation in ethical standards. The Institute of Electrical and Electronics Engineers (IEEE) is one body that sets global safety standards for electronics – like IEEE 63-1928, which covered insulating wires with rubber so they don’t electrocute people.
Today, it is drafting new standards for the new dangers of electronics – such as P7009, on ensuring effective fail-safe mechanisms so AI doesn’t cause harm when it fails or is no longer needed. These are valuable – and much-needed – standards, but we should also ask ourselves: where are the underlying levers for change, before a problem is created and regulation and legislation become necessary?
Such standards might have helped in the paperclip thought experiment, but the root cause there is the AI’s sole focus on one outcome, with no concern for any other. And tunnel vision doesn’t just afflict the tech; it afflicts the tech companies themselves.
Tech is an industry driven by venture capital: investors own and control companies in order to create financial returns. Their influence ensures that scale is prioritised above all else, pushing companies to “move fast and break things” as they race each other for dominance of an emerging market.
This model has been incredibly successful at driving innovation forward – innovation that could be harnessed to do good – but far less successful at creating ventures that focus on anything more than generating returns for their founders and investors.
In recent years, impact investment has emerged as a way to rebalance venture capital towards a more social purpose. It has worked well on challenges that fit the mould of business – shifting solar panels at scale, for example – and less well on more nuanced challenges, like youth mental health. It hasn’t yet created any unicorn-sized solutions to our unicorn-sized global problems – although it is early days.
This is why innovative, truly ethical tech ventures first require innovation in the way they’re capitalised. Venture capital was an amazing innovation in deploying capital in the face of risk; we need a similar breakthrough now.
One that gives social outcomes parity with financial returns. There are glimmers of hope here. Social impact bonds (SIBs) provide returns for investors tied to the delivery of social outcomes, although the difficulty of giving cast-iron guarantees of outcomes, combined with a lack of confidence in the numbers generated in impact reports, continues to hold SIBs back from the mainstream.
Dare we allow ourselves to think about what could supersede or complement venture capital, rather than just how we might restrict venture capital’s monocular focus on financial value?
While tech ethics leads the way in debating how tech could do less harm, the burgeoning tech-for-good movement remains consigned to the fringes, given limited resources and the status of a separate branch of technology – as if ‘mainstream tech’ and ‘tech for good’ were two separate endeavours.
We need to unify the two and take tech ethics back to its roots in the philosophy of ethics: a philosophy concerned not just with how to do no harm, but one that starts with the question of how we can do the most good.
Take Immanuel Kant, one of the central figures of modern ethics, and his categorical imperative. It asserts that to be ethical we must never treat others merely as a means to an end, but always, additionally, as ends in themselves. Or, perhaps: never treat users merely as a means to ad revenue.
This is how we should think about ethics in tech – not as we mean it in ‘ethical’ coffee or ‘responsible’ gambling, but by putting the human, and the improvement of that human’s life, at the centre, rather than simply trying to do as little harm as possible while profiting from them.
To do so, we should take heed of another of Clarke’s Three Laws, the second: “The only way of discovering the limits of the possible is to venture a little way past them into the impossible.”
We owe it to humanity to urgently discover the limits of the possible here, not just to talk about limiting the harm of what’s possible. To take the abilities that tech provides us and use them mindfully to do good. Now that, for many, would be true magic.
This article was originally published here.
Matt is a speaker at DigitalAgenda’s Power & Responsibility Summit, taking place at London’s British Library on Thursday 4 October. More information, including how to secure your ticket, is available.