5 thoughts on design and AI


Written by Richard Pope, Chief Operating Officer at IF

As more of the things we use get smarter, explaining how products change over time, and giving people options for recourse when a non-deterministic decision has been made, will become fundamental to the work of designers.

Our work at IF is increasingly taking us into areas of machine intelligence and machine learning. Here are five themes that are emerging for us:

1. Just because the technology feels like magic doesn’t mean making it understandable requires magic.

Some of the ways we make decisions understandable to people might be quite simple, even mundane – the right words in the right place with the right visual emphasis, a button in the right place to let someone object. Food packaging, energy ratings and road safety have all been made legible by hard work and iterative design. In short, the sort of thing designers do.

2. Designers are going to need to get familiar with new materials to make things make sense to people.

Training data gets talked about a lot, but designers will also have to work with materials like version history, software tests, UI history, and verifiable data audits. Caroline Sinders wrote about this earlier this year: “The product you are building uses a specific kind of algorithm and how that algorithm responds to a specific data set is a design effect”.
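
To make one of those materials concrete: a verifiable data audit can be as simple as an append-only log in which each entry is chained to the hash of the one before it, so that any later tampering with the history is detectable. The sketch below is a minimal illustration in Python; the function names and entry fields are our own assumptions for this example, not a description of any particular system.

```python
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    """Deterministic SHA-256 over a canonical (sorted-key) JSON encoding."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_entry(log: list, event: str, detail: dict) -> dict:
    """Append an event, chained to the previous entry's hash so that
    editing any earlier entry breaks verification."""
    prev = log[-1]["hash"] if log else ""
    entry = {"time": time.time(), "event": event, "detail": detail, "prev": prev}
    entry["hash"] = entry_hash(entry)  # hash covers everything except itself
    log.append(entry)
    return entry

def verify(log: list) -> bool:
    """Recompute every hash and check that the chain is unbroken."""
    prev = ""
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry_hash(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

# Hypothetical usage: record a model update and a decision, then verify.
log = []
append_entry(log, "model_updated", {"version": "2.3.1"})
append_entry(log, "decision_made", {"outcome": "declined"})
assert verify(log)
```

Real verifiable audit systems add signatures, timestamps from trusted sources and public transparency logs, but the design material is the same: a history that can be checked rather than merely trusted.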

3. We need to make sure people have an option to object when something isn’t right.

It needs to be clear what a service is doing so that people can object when it does something it shouldn’t, or makes ‘bad’ decisions. There are lots of reasons why this might happen, from incomplete data to bias in the training data, but what needs to be consistent is that people can raise an alarm. That’s going to be important for maintaining people’s trust, and it’s the direction legislation like the GDPR is moving in.
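
One way to think about this in a product’s data model: every automated decision needs to carry enough context – what was decided, which model version made it, what data it used – for a person to contest it later. The sketch below is a minimal, hypothetical shape for such a record in Python; all the names are illustrative, and contestability under the GDPR involves much more than a data structure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4

@dataclass
class Decision:
    """An automated decision recorded with enough context to be contested."""
    summary: str        # plain-language account of what was decided
    model_version: str  # which model or ruleset produced the decision
    inputs_used: dict   # the data the decision was based on
    id: str = field(default_factory=lambda: uuid4().hex)
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    objections: list = field(default_factory=list)

    def object(self, reason: str) -> None:
        """Record an objection, flagging the decision for human review."""
        self.objections.append({
            "reason": reason,
            "at": datetime.now(timezone.utc).isoformat(),
        })

# Hypothetical usage: a loan decision the applicant can push back on.
d = Decision(summary="Application declined",
             model_version="risk-model-1.4",
             inputs_used={"income_band": "B", "postcode_area": "SW1"})
d.object("My income data is out of date.")
```

The point is less the code than the contract it implies: if a service cannot reproduce what it decided and why, there is nothing concrete for a person to object to.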

4. We should not fall into the trap of assuming the way to make machine learning understandable should be purely individualistic.

Lots of the applications we are seeing are quite individualistic – help me type faster, help me find something to watch on TV.

Collective action – things we can only do together or with the help of an organisation – needs to be part of the answer too. How might consumer groups, unions or medical charities help the people they represent know that they can trust a service that uses machine intelligence? How might they help people come together to spot problems? There are some interesting service design questions emerging here. There are also more systemic challenges: the Californian Ideology isn’t the only way of looking at the world. Incorporating machine learning into design education is critical, so that these skills are accessible to organisations outside the big tech companies.

5. We need to think about how we design regulators too.

I touched on this in our submission to the Science and Technology Committee of the UK Parliament earlier this year. How do we set up our regulators to audit regular code and machine learning, so that society knows it is doing what it is supposed to do? What new skills and tools does a parliamentary committee need to investigate when things go wrong? The weak signals from the car industry are not good, but there may be some patterns to copy from finance or gambling machine regulation, where there is already some auditing of code.


This article was originally published here and was reposted with permission.
