Challenging the use of AI to reinforce stereotypes
November 2017
As more of the things we use get smarter, explaining how products change over time and giving people options for recourse when a non-deterministic decision has been made will become fundamental to the work of designers.
Our work at IF is increasingly taking us into areas of machine intelligence and machine learning. Here are five themes that are emerging for us:
Some of the ways we make decisions understandable to people might be quite simple, even mundane – the right words in the right place with the right visual emphasis, a button in the right place to let someone object. Food packaging, energy ratings and road safety have all been made legible by hard work and iterative design. In short, the sort of thing designers do.
Training data gets talked about a lot, but designers will also have to work more with things like version history, software tests, UI history, and verifiable data audits. Caroline Sinders wrote about this earlier this year: “The product you are building uses a specific kind of algorithm and how that algorithm responds to a specific data set is a design effect”.
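To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical field and version names) of what recording the provenance of a single automated decision might look like – the model version, a fingerprint of the training data and the UI version the person saw, kept together so the decision can be audited later.

```python
import hashlib
import json
from datetime import datetime, timezone


def dataset_fingerprint(data: bytes) -> str:
    """SHA-256 hash of a training-data snapshot, so the exact data behind
    a decision can be verified later. In practice this would hash the
    training file or dataset export."""
    return hashlib.sha256(data).hexdigest()


def record_decision(decision, model_version, dataset_hash, ui_version):
    """Bundle a decision with the versions that produced it.
    All field names here are illustrative, not a standard."""
    return {
        "decision": decision,
        "model_version": model_version,        # e.g. a git tag or model registry ID
        "training_data_sha256": dataset_hash,  # ties the decision to specific data
        "ui_version": ui_version,              # which interface the person saw
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }


# Log enough context that a later audit can reconstruct which model,
# data and interface were involved in this particular decision.
record = record_decision(
    decision="loan_refused",
    model_version="credit-model-2017.11.2",
    dataset_hash=dataset_fingerprint(b"...training data snapshot..."),
    ui_version="apply-flow-v14",
)
print(json.dumps(record, indent=2))
```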
It needs to be clear what a service is doing so that people can object when it does something it shouldn’t, or makes ‘bad’ decisions. There are lots of reasons why this might happen, from incomplete data to bias in the training data, but what needs to be consistent is that people can raise an alarm. That’s going to be important for maintaining people’s trust, and it’s the direction in which legislation like GDPR is moving.
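As one sketch of what ‘raising an alarm’ could look like in practice, here is a hypothetical shape (illustrative field names, not a GDPR-mandated format) for what a service might return alongside an automated decision: a plain-language reason and an explicit route to contest it.

```python
from dataclasses import dataclass


@dataclass
class AutomatedDecision:
    """One possible shape for exposing an automated decision to a person.
    Field names are illustrative; the point is that the reason and the
    route to object travel with the decision itself."""
    outcome: str        # what the service decided
    reason: str         # plain-language explanation of why
    model_version: str  # which model made the call
    contest_url: str    # where a person can object or ask for human review


decision = AutomatedDecision(
    outcome="application_refused",
    reason="Income could not be verified from the documents provided.",
    model_version="credit-model-2017.11.2",
    contest_url="https://example.com/decisions/8421/contest",
)

# The interface can then show the reason and a clearly labelled
# "I think this is wrong" action, rather than a bare outcome.
print(f"{decision.outcome}: {decision.reason}")
print(f"Object here: {decision.contest_url}")
```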
Lots of the applications we are seeing are quite individualistic – help me type faster, find something to watch on TV.
Collective action – things we can only do together or with the help of an organisation – needs to be part of the answer too. How might consumer groups, unions or medical charities help the people they represent know that they can trust a service that uses machine intelligence? How might they help people come together to spot problems? There are some interesting, emerging service design questions to tackle here. There are also more systemic challenges: the Californian Ideology isn’t the only way of looking at the world. Incorporating machine learning into design education is critical, so that these skills are accessible to organisations beyond the big tech companies.
I touched on this in our submission to the Science and Technology Committee of the UK Parliament earlier this year. How do we set up our regulators to be able to audit regular code and machine learning, so that society knows it is doing what it is supposed to? What new skills and tools does a parliamentary committee need to investigate when things go wrong? The weak signals from the car industry are not good, but there may be patterns to copy from finance or gambling machine regulation, where there is already some auditing of code.
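One small building block for that kind of audit is an append-only log in which each entry carries the hash of the previous one, so a regulator can check that records of automated decisions haven’t been quietly edited after the fact. A minimal sketch, purely illustrative and not a description of any existing regulator’s tooling:

```python
import hashlib
import json


def append_entry(log, event):
    """Append an event to a hash-chained log. Each entry records the hash
    of the previous entry, so later tampering breaks the chain."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "entry_hash": entry_hash})
    return log


def verify(log):
    """Recompute every hash; returns True only if no entry has been altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["entry_hash"] != expected:
            return False
        prev_hash = entry["entry_hash"]
    return True


log = []
append_entry(log, {"decision": "claim_rejected", "model": "v3.1"})
append_entry(log, {"decision": "claim_approved", "model": "v3.1"})
print(verify(log))                   # True
log[0]["event"]["model"] = "v3.2"    # simulate someone editing the record
print(verify(log))                   # False
```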
This article was originally published here and was reposted with permission.