10 ways AI is a force for good

Sophia the robot at the AI for Good Global Summit 2017

Written by Geoff Mulgan, CEO of Nesta

The AI boom shows no signs of letting up, reaching fever pitch not just in the US and China but also in the UK, Canada, the UAE and elsewhere.

Most of the focus has been on economic growth (like China’s ambition for a $150bn industry). But there has also been a flurry of interest in AI for good: a proliferation of events and of organisations being set up (like AI Now, OpenAI and AI4ALL in the US, and the Ada Lovelace Institute or Element AI’s new London office focusing on AI for good here in the UK).

Some are real and some may turn out to be little more than hot air, or algorithmic greenwash. So what can we hope for?

Through the half-century history of AI, the direction of research and funding has been dominated by the military and intelligence agencies. Over the last decade that has shifted with the now huge commercial investment by Amazon, Google, Alibaba, Tencent and others. But throughout this time there has been very little serious interest in, and almost no funding for, AI as a tool for empowering citizens or consumers, and relatively little for AI for public benefit. This may now be changing.

A good reason for wanting to change things is the ambivalent state of public attitudes. Our recent poll showed that 40 per cent of the British public view AI as a threat equivalent to nuclear weapons, and a majority want international regulation. Nuclear power and GM crops ran into brick walls because they failed to convince much of the public that the benefits outweighed the risks. It’s not hard to imagine AI running into similar problems.

So here I suggest 10 main areas where AI for good is developing – hopefully for real.

1. New institutions charged with bending AI towards better and away from worse

As many anticipated, we are now beginning to see more serious attention to institution-building to help organise public policy around AI, including the UK’s Centre for Data Ethics and Innovation and the Office for AI. There is a huge amount to be done to get the balance right and to decide how to counter bias and harms of all kinds. But at least it’s now part of some people’s jobs to act rather than just comment.

2. Direct empowerment

That is, projects that directly enhance people’s power relative to big government or big firms. This is the most obvious space that AI could be filling. DoNotPay is an example, set up by a 20-year-old student, Joshua Browder, and used by hundreds of thousands of people to challenge their speeding and parking tickets. But it’s notable how few examples of this kind there are, and there is still almost nothing in the labour market to empower workers and counterbalance the huge power AI gives employers. Direct empowerment currently looks like a blind spot for the AI for good community.

3. Projects designed for broader social good

AI can be mobilised to help farmers, improve diagnostics or personalise learning, and there are now many examples on the market of AI applications for each of these. Health diagnostics has attracted most of the investment (including from IBM Watson and Google DeepMind), but there are also many imaginative examples of AI for good in other fields, like algorithms to help refugees find jobs. Many are promising, but overall they are remarkably small in scale so far (with diagnostics the exception), and many still struggle to secure finance (like the refugee example). The cost of good AI researchers is a key factor. This leads some to conclude that the only practical option in the near term is to get the big corporates to commit a share of their researchers’ time to AI for good projects.

4. Public engagement in debate about the uses of AI

For most of the last half century the public were worried observers of AI developments. This has changed a lot in recent years, especially in the UK, ranging from Nesta’s work on AI decision making in health to the Royal Society’s use of citizens’ panels and juries. In most countries, however, the debates remain very much at elite level. This is particularly striking in the US, where a lot of money is going into AI for good initiatives linked to universities but very little is being done to engage the wider public (perhaps a symptom of a broader issue in US politics). There are a few exceptions, like HK Lab, part of the Danish white-collar union HK, which proactively mobilises works councils to discuss new technology applications and consider potential job redesigns. Globally, of course, much more is being invested to serve the interests of rich consumers than poor ones (for example, through recommendation engines), even though AI may well have far more potential impact on health and agriculture in the developing world.

5. More diversity in AI itself

Many have pointed out the remarkable skews in the AI field itself. The programmers and entrepreneurs tend to be white, male and privileged, and this is inevitably reflected in the sort of issues that are seen as priorities, as well as in the embedding of bias (like the criminal justice algorithms that made predictions about women prisoners based on data from male prisoners). A clutch of initiatives is now addressing this, from Black in AI and Women in Machine Learning to projects on feminist data sets to counter male bias. Meanwhile, many countries are now ramping up training in AI, machine learning and data science – but it’s unclear how many are prioritising diversity. Diversity matters at every stage, from the sourcing of data through to design and use.
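
To see why skewed training data matters, here is a deliberately toy sketch (entirely synthetic data, nothing to do with any real criminal justice system): a model fitted to one group, where a hypothetical second group is assumed to have a different relationship between feature and outcome, performs badly on that second group.

```python
# Toy illustration of bias from skewed training data (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weight):
    """Synthetic group in which one feature predicts the outcome
    with a group-specific (hypothetical) strength and direction."""
    x = rng.normal(size=(n, 1))
    p = 1 / (1 + np.exp(-weight * x[:, 0]))
    return x, rng.binomial(1, p)

# Group A dominates the training data; group B's relationship is assumed
# (purely for illustration) to run in the opposite direction.
xa, ya = make_group(5000, weight=2.0)
xb, yb = make_group(5000, weight=-2.0)

model = LogisticRegression().fit(xa, ya)  # trained on group A only
print("accuracy on group A:", round(model.score(xa, ya), 2))  # high
print("accuracy on group B:", round(model.score(xb, yb), 2))  # far worse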

6. Open data rules and data empowerment initiatives

That is, initiatives that will, in time, encourage the use of AI to strengthen consumer choice and power. This is a crucial area of development. The DECODE project – which Nesta helps organise across Europe – aims to put citizens in control of their own data, and the open banking initiative – part of which Nesta runs via the Open Up Challenge – creates new markets for products and services that use AI to empower small businesses and, in time, individual customers. Although these aren’t labelled as AI initiatives, they have more potential to shift things than most. They bring to the surface the crucial tension between reaping the full benefits of machine learning – which depends on combining large datasets – and the imperatives of privacy.
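
One well-known way of softening that tension is differential privacy, which publishes aggregate statistics with calibrated noise so that no individual record can be inferred. A minimal sketch follows – illustrative only, with made-up data, and not how DECODE or open banking actually work:

```python
# Minimal sketch of a differentially private count (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def dp_count(records, predicate, epsilon=0.5):
    """Return a noisy count satisfying epsilon-differential privacy.
    Adding or removing one record changes the true count by at most 1,
    so Laplace noise with scale 1/epsilon masks any individual."""
    true_count = sum(predicate(r) for r in records)
    return true_count + rng.laplace(scale=1.0 / epsilon)

# Hypothetical data: monthly spend figures for 10,000 small businesses.
spend = rng.gamma(shape=2.0, scale=500.0, size=10000)
print("noisy count of businesses spending over £1,000:",
      round(dp_count(spend, lambda s: s > 1000)))
```

The key design choice is the privacy budget epsilon: smaller values give stronger privacy but noisier answers, which is exactly the dataset-pooling trade-off made explicit.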

7. The political economy of AI

We’re just beginning to see the first stirrings of a more serious debate about the political economy of AI. Some very crude analyses of the effects on jobs are still appearing regularly, even though the patterns are bound to be complex (as some jobs are replaced, others augmented and others created), and a lot needs to be done to nudge those trends in healthy directions. But the more basic issue is this: if truly big productivity gains are to be achieved for the economy and society as a whole, how should the winners compensate the losers? Joseph Stiglitz’s recent paper is a good example of the debate that’s needed, looking at questions of tax and how to capture windfall gains for holders of complementary assets. It’s very thin on detail, but it confirms that jumping to Universal Basic Income as the answer will soon seem a very inadequate response.

8. AI for democracy

AI will, in time, have a huge impact on democracy. So far, its main effects have been negative – through algorithms spreading fake news. But there are some examples of how AI can help a public, or a group, better understand the dynamics of opinion. Pol.is does this, and is used by vTaiwan (and Nesta). More extreme proposals suggest that AI could simply deduce the public mood and save the trouble of elections (the often brilliant Cesar Hidalgo made a proposal on this that verges on parody). The key, of course, will be to combine AI and CI (Collective Intelligence) in smart ways.
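
To make “understanding the dynamics of opinion” concrete, here is a sketch in the spirit of Pol.is-style opinion mapping – illustrative only, not Pol.is’s actual algorithm, and using randomly generated votes: participants are clustered by their agree/disagree votes so that distinct opinion groups become visible.

```python
# A sketch in the spirit of Pol.is-style opinion mapping (illustrative;
# not Pol.is's actual algorithm): cluster participants by their votes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical votes: 300 participants x 20 statements, values in
# {-1, 0, +1} for disagree / pass / agree, drawn from three latent camps.
camps = rng.integers(0, 3, size=300)
profiles = rng.choice([-1.0, 1.0], size=(3, 20))      # each camp's leanings
noise = rng.normal(scale=0.8, size=(300, 20))
votes = np.sign(profiles[camps] + noise).astype(int)  # noisy individual votes

coords = PCA(n_components=2).fit_transform(votes)     # 2-D "opinion map"
groups = KMeans(n_clusters=3, n_init=10).fit_predict(coords)

for g in range(3):
    print(f"opinion group {g}: {(groups == g).sum()} participants")
```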

9. Citizen and media activism vs ethics committees

We’re beginning to see citizen activism directly challenging abuses, particularly around bias. In Europe, campaigners are helped by the GDPR rules that, in theory, require algorithms to be transparent or explicable. Here there is an interesting contrast between the various corporate moves to use ethics committees to achieve legitimacy and the impact of external campaigners. A quick summary would be that the various ethics committees – notably Facebook’s – have achieved very little, while activism and investigative journalism have achieved quite a lot. Probably the only useful thing the members of Facebook’s committee could have done would have been a mass resignation. If nothing else, difficult questions are now being asked of the ‘data ethics’ experts who spent a lot of time discussing theoretical questions (like the ‘trolley problem’) and little on the very real dilemmas of the present. In the same way, initiatives like OpenAI, which were given the benefit of the doubt, are now being viewed much more sceptically (as indicated by Elon Musk’s exit).
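
What “explicable” can mean in practice is, for some model families, simpler than it sounds. A minimal sketch follows – illustrative only, with hypothetical feature names and synthetic data, and not a method GDPR prescribes: for a linear model, each feature’s contribution to a single decision can be reported directly.

```python
# Minimal sketch of explaining one decision from a linear model
# (hypothetical features and synthetic data; illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

features = ["income", "tenure_years", "missed_payments"]  # hypothetical
X = rng.normal(size=(1000, 3))
y = (X @ np.array([1.0, 0.5, -2.0]) + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = X[0]
contributions = model.coef_[0] * applicant  # per-feature log-odds contribution
for name, c in zip(features, contributions):
    print(f"{name}: {c:+.2f} toward approval")
```

More opaque models need post-hoc approximation tools, but this linear case shows the baseline idea behind demands for explicability.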

10. Codes and standards

As with any fundamental technology, standards matter a lot and become a key source of power. Some are technical standards – and China’s success in securing the chair of the key ISO committee is significant. Other codes are ethical, like Eddie Copeland’s proposed principles for public sector uses of AI. Hopefully we will soon see some of these being adopted and acted on.

I’m sure there are many other examples. There is much more activity in this space now than a year ago. But the big question remains whether any of this is sufficient, or proportionate to the scale of investment in top-down variants of AI that primarily empower big business or governments. For now, people-powered AI is very much the exception, not the rule.

