Keeping Bias From Creeping Into Code

Written by Ansgar Koene, Senior Research Fellow, University of Nottingham

Many people perceive software as being free from bias, but it’s not. In a TEDx Talk, Joy Buolamwini, a researcher at the MIT Media Lab and an African-American, describes how she dons a white mask because generic facial-recognition software used by robotics programs often fails to detect her face. And in her award-winning book Weapons of Math Destruction, mathematician Cathy O’Neil exposes how machine-learning systems used to score teachers and students, sort résumés, and predict crime patterns are discriminatory.

“We’ve been trained to believe that humans are not neutral but assume computers just run through a series of processes, so there is no reason to suspect the programs are biased, even though they’ve been programmed by people,” IEEE Member Ansgar Koene says.

The Institute asked Koene, a senior research fellow at the University of Nottingham’s Horizon Digital Economy Research Institute, in England, how biases wind up in software, what’s being done to prevent that, and how bias could negatively impact the development of AI.

What does bias in code mean?

In these discussions, bias in algorithms refers to societally unacceptable processing decisions that disadvantage people or groups on grounds that have no legitimate justification within the context of the task. These can be software decisions that are inconsistent with legislation concerning protected characteristics such as age, race, gender, and sexual orientation. But it also covers groups that are not explicitly protected by legislation yet whose well-being would otherwise be diminished, such as people with atypical living patterns, like those who work the night shift.

How are biases being introduced into the programs and systems we use?

Typically, through the data sets used to test the code. Most programmers use the data sets that are easiest and least expensive to acquire, and those sets can already have biases built in because the existing data is skewed. For example, stereotypes get perpetuated in ways that could affect career choices. Most stock images of physicians are of white males, simply because those images have been uploaded the most. Based on these images, the algorithm incorrectly correlates the profession with white males. As a result, women and people of other races who search for information about the medical profession are shown pictures of predominantly white males, which could lead them to choose another profession.
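To make the mechanism concrete, here is a minimal sketch using an entirely hypothetical, deliberately skewed image data set. The “model” is nothing more than a frequency count, but that is enough to show how a majority in the training data becomes a learned rule.

```python
# A minimal sketch (hypothetical data and labels) of how a skewed training set
# turns a demographic majority into a learned "rule". The toy "model" simply
# predicts the most frequent group seen for a profession label.
from collections import Counter

# Toy training set: (profession, group of the person pictured). The imbalance
# mirrors the stock-photo example: most uploaded physician images happen to
# show white men.
training_images = (
    [("physician", "white male")] * 80
    + [("physician", "white female")] * 10
    + [("physician", "man of color")] * 5
    + [("physician", "woman of color")] * 5
)

counts = Counter(group for _, group in training_images)
print(counts)  # Counter({'white male': 80, 'white female': 10, ...})

# A frequency-based "model" now associates the profession with the majority
# group, so search results or auto-captioning keep surfacing that group.
predicted_typical_physician = counts.most_common(1)[0][0]
print("Model's 'typical physician':", predicted_typical_physician)  # white male
```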

It’s similar to a problem found in the psychology literature, in which test subjects are primarily WEIRD, an acronym for Western, educated, industrialized, rich, and democratic. As a result, ideas about how human psychology works are skewed to match that demographic group. In a sense, we’re seeing the same problem transfer into computer science and engineering.

What are the consequences?

Ultimately, these biases can exclude people from services that could help them, such as qualifying for a credit card or a loan, or receiving government assistance.

Why are biases especially concerning for AI applications?

An AI system is built to run on its own, without much human assistance, so it is easier for a bias to become part of a self-reinforcing loop. An example is predictive policing, in which an algorithm predicts that an area needs more surveillance. Because there are more police in that area, more criminal activity gets reported there, so the new data set generated from those reports shows more criminal activity. Through this feedback loop, the system keeps predicting higher crime for that area, and the prediction becomes a self-fulfilling prophecy. When humans are involved, they are more likely to question the results.
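The loop Koene describes can be reproduced with a toy simulation. The numbers and the allocation rule below are hypothetical (patrols weighted toward predicted “hot spots”), not a description of any real predictive-policing system.

```python
# A minimal sketch (toy numbers, hypothetical model) of the feedback loop
# described above: patrols follow predictions, recorded incidents follow
# patrols, and next year's prediction is trained on those records.

TRUE_RATE = {"district_A": 0.10, "district_B": 0.10}        # identical underlying crime
predicted_share = {"district_A": 0.55, "district_B": 0.45}  # small initial skew
TOTAL_PATROLS = 100

for year in range(1, 6):
    # Toy assumption: the department concentrates patrols on predicted
    # "hot spots", so allocation weights predictions super-linearly.
    weights = {d: p ** 2 for d, p in predicted_share.items()}
    total_w = sum(weights.values())
    patrols = {d: TOTAL_PATROLS * w / total_w for d, w in weights.items()}

    # More patrols in a district means more incidents recorded there, even
    # though the true crime rate is the same in both districts.
    recorded = {d: patrols[d] * TRUE_RATE[d] for d in patrols}

    # "Retraining": next year's predicted share is just this year's share
    # of recorded incidents -- the prediction confirms itself.
    total_recorded = sum(recorded.values())
    predicted_share = {d: r / total_recorded for d, r in recorded.items()}

    print(f"year {year}:", {d: round(s, 2) for d, s in predicted_share.items()})

# The share predicted for district_A climbs each year toward 1.0, despite
# identical true crime rates: a self-fulfilling prophecy.
```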

Another concern is that social values change over time, or through legislation, so code would have to change with them. Because AI systems have those values embedded in their code, it will be difficult and expensive to update these automated systems when societal attitudes shift.

What are some of the challenges in trying to eliminate biases?

To a certain extent, systems are supposed to be biased. I would hope the results I get from a search engine are skewed toward those that best match the search term I enter. Problems occur, however, when the decision criteria the algorithm is optimizing for are not appropriate to the context in which the system is used. Programmers need to know what is in the data sets that were used to debug the system, and who will be using the algorithm. A common practice is reusing prior code for other purposes. When that happens, the developer needs to rethink whether the algorithms are still appropriate.
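One practical way to act on this, sketched below with hypothetical field names and toy records, is to audit a data set before reusing it in a new context: report how a sensitive attribute is distributed and whether outcomes differ across groups.

```python
# A minimal sketch (hypothetical field names, toy records) of a pre-reuse
# audit: show group sizes and the favourable-outcome rate per group.
from collections import Counter

def audit(records, sensitive_key, outcome_key):
    """Print group sizes and the favourable-outcome rate for each group."""
    groups = Counter(r[sensitive_key] for r in records)
    for group, size in groups.items():
        favourable = sum(
            1 for r in records
            if r[sensitive_key] == group and r[outcome_key]
        )
        print(f"{group}: n={size}, favourable rate={favourable / size:.2f}")

# Toy records standing in for a loan-decision data set.
records = [
    {"gender": "female", "approved": True},
    {"gender": "female", "approved": False},
    {"gender": "male", "approved": True},
    {"gender": "male", "approved": True},
]
audit(records, sensitive_key="gender", outcome_key="approved")
# female: n=2, favourable rate=0.50
# male: n=2, favourable rate=1.00
```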

What is being done to prevent flawed algorithms?

Last September the U.K.’s Engineering and Physical Sciences Research Council funded the UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy project, which involves the universities of Nottingham, Oxford, and Edinburgh. It looks at the user’s experience of algorithm-driven Internet services and the process of algorithm design to ensure trust and transparency have been built into these services.

Professional associations, like the British Computer Society and the Association for Computing Machinery, have also come out with statements. In January the ACM issued seven principles for algorithmic transparency and accountability.

What is IEEE doing?

It’s going beyond a brief statement of principles and is digging deeper than other organizations. That’s where the Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems—an IEEE Standards Association Industry Connections activity—comes in. The initiative has produced the 250-page “Ethically Aligned Design: A Vision for Prioritizing Human Well-being With Artificial Intelligence and Autonomous Systems” document to give guidance on, for example, what algorithmic transparency means. It covers methodologies to guide ethical research and design that upholds human values outlined in the U.N. Universal Declaration of Human Rights.

In support of this global initiative, I chair the IEEE P7003 Standard for Algorithmic Bias Considerations working group. It is creating specific methodologies to help programmers assert how they worked to address and eliminate issues of negative bias in the creation of their algorithms. The P7003 group is just one of 11 P7000 series working groups that are translating the principles of the global initiative into practical tools for AI and autonomous systems developers.


This article was originally published here and was reposted with permission.
