5 ethical questions to ask when building your AI

Written by Kate Baucherel, Founder, Galia Digital

BCS, The Chartered Institute for IT, has called on the Prime Minister to make ethics a priority at the upcoming AI Safety Summit. As AI explodes into the public consciousness, ethics has to be front and centre – and proactive.

We can’t snooze on the job. Around 2015 I heard DeepMind co-founder Demis Hassabis interviewed about AlphaGo. When asked about the dangers of unethical development, he suggested that it wasn’t an immediate problem, as very few people could build an AI. In 2017 I was already writing about the risks of compromising on AI ethics. Ethics go hand in hand with development, and it’s incredible how fast the world has changed.

The fabulous Timnit Gebru, former co-lead of Google’s AI ethics team, spoke at Inventures Canada in June. She asks five simple questions.


  1. Do you build it?

This needs to be asked whenever a new technology rears its tempting head. It’s all the more important when dealing with AI thanks to a noticeable over-estimation of its current capabilities. What are you planning to do? Are you likely to crash into issues around data use consent, cultural sensitivities, or personal privacy? And could a different or simpler solution do the job better?


  2. How do you build it?

AI depends entirely on carefully tagged data, and lots of it. Providing this involves, according to Gebru, “millions of people scraping data, building neural networks, and labeling data.” There is a multitude of different skills under the hood, too. Before we bundled everything under the AI umbrella, we talked about natural language processing (NLP), imaging and algorithms. Shortcuts are not an option.


  3. How do you test it?

If you test your new AI on a data set adjacent to your training data, it’s going to perform beautifully: you’ve already given it all the answers. It’s time to play hardball. Disaggregating your test data, breaking results down by characteristics such as race and gender rather than looking only at the aggregate, enables intersectional analysis. This delivers powerful insights and can reveal hidden problems in the algorithm and the underlying data, such as unintended bias. The term ‘intersectionality’ was coined by Kimberlé Crenshaw, drawing on a 1976 hiring discrimination case brought by Black women against General Motors. “Black jobs were available to Black men, and female jobs were available to white women. However, Black women were not employed in a similar manner.” Looking at the intersection of race and gender reveals the discrimination, whereas the original case simply concluded that there was sufficient diversity in hiring.
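
To make disaggregated testing concrete, here is a minimal sketch in Python using pandas. The data, column names and values are invented for illustration (nothing here is drawn from a real hiring system or from Gebru’s talk); the point is the pattern: compute a metric once in aggregate, then again per intersectional subgroup, which is where a failure like the one above surfaces.

```python
import pandas as pd

# Toy, invented results from a hypothetical hiring model:
# one row per applicant, with the model's recommendation and
# the ground-truth outcome we are testing against.
results = pd.DataFrame({
    "race":      ["Black", "Black", "white", "white", "Black", "white"],
    "gender":    ["woman", "man", "woman", "man", "woman", "man"],
    "predicted": [0, 1, 1, 1, 0, 1],   # model says hire (1) / reject (0)
    "actual":    [1, 1, 1, 1, 1, 1],   # every applicant was in fact suitable
})

# The aggregate number looks respectable and hides the problem.
overall = (results["predicted"] == results["actual"]).mean()
print(f"Overall accuracy: {overall:.2f}")   # 0.67

# Disaggregating by race AND gender exposes the intersectional gap:
# the model scores perfectly for Black men and for white applicants,
# and fails every Black woman in the sample.
per_group = (
    results.assign(correct=results["predicted"] == results["actual"])
           .groupby(["race", "gender"])["correct"]
           .mean()
)
print(per_group)
```

With real evaluation data the same pattern extends to whatever protected characteristics you hold, and to more telling metrics than accuracy, such as false-negative rates per subgroup.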


  4. How do you deploy it?

We have an inbuilt trust in the machine. Computer says yes! Great, but why? We apply critical thinking when talking to humans, but we have a tendency to believe the machine. There are stories of drivers blindly following the SatNav down pedestrian streets, across dangerous bridges, and even to the wrong country in Europe, arriving in Rome, Germany, instead of Rome, Italy. In June 2023, a legal firm was sanctioned for submitting a ChatGPT-generated legal brief that included six fictitious case citations. Transparency of algorithms and decision making, and education on the limitations of AI, should be front and centre of deployment.


  5. What are the unintended harms?

Who is being harmed by the incautious deployment of AI? The scraping and tagging of data, and the moderation of data sources to remove the extremes of abuse and toxic opinion, are human tasks. They exact a mental toll on low-paid moderators who are exposed to the worst of the internet. And what about the data itself? We have a wealth of data in the world, with the volume doubling every two years, and it reflects all of our changing attitudes over time. Not only can the data behind those decisions reinforce old stereotypes but, as Caroline Criado Perez highlights in her book Invisible Women, unless data is disaggregated, it discriminates.

We are entering a new age of hype over AI. The pace of change is accelerating, and the capabilities of AI are only going to expand. It’s up to us to apply strong ethics to development, avoid shortcuts, and harness this tool for the good of all. What do you think is the most important of these guidelines for today’s developers?


Originally published by Kate Baucherel www.galiadigital.com. Kate is a speaker, author and consultant specialising in Web3 technologies including blockchain, cryptocurrency and AI.
