Are ethics ruining the AI party?

Written by Russell Haworth, CEO Nominet

Artificial intelligence (AI) is one of the most incredible advancements of our time, and society now needs to prepare for its application on a wide scale. That means asking the big ethical questions and clipping AI’s wings in the short term, to ensure it stays within moral frameworks we have yet to design.

This is most pertinent for AI equipped with machine learning, in which algorithms allow a computer to adapt over time in response to stimuli – or ‘learn’ from its interactions. There are supervised and unsupervised approaches to machine learning, and the latter presents some potential complications. If we can’t supervise the learning, we can understand neither how decisions are reached nor the ‘thought’ process behind them. How do we ensure the route taken and the choices made by AI are ethical?
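For readers who want that distinction made concrete, here is a minimal sketch in Python (using scikit-learn, a tooling choice made purely for illustration): the supervised model can be checked against the answers it was given, while the unsupervised one invents groupings we cannot directly verify.

```python
# A toy contrast between supervised and unsupervised learning.
# Assumes Python with scikit-learn; the article names no specific tools.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Toy data: 200 two-dimensional points drawn from three groups.
X, y = make_blobs(n_samples=200, centers=3, random_state=0)

# Supervised: we supply the right answers (the labels y), so every
# prediction can be checked directly against them.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: no labels are supplied, and the model invents its own
# grouping. What cluster 0 'means' is exactly the kind of opaque
# internal choice that is hard to audit.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("first ten cluster assignments:", km.labels_[:10])
```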

Ethical decisions for AI

Human beings make decisions based on context, their past and the cultural norms of the society in which they live. AI has no such resources to draw upon. A machine must be programmed not to make decisions solely on mathematical logic, but to follow an ethical and moral code that human beings have hardwired into it. A robot needs to know that if a person has run out of meat for dinner, it is unacceptable to cook the cat.
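What might that hardwiring look like? One hypothetical sketch, in which an ethical veto is checked before the machine’s optimiser gets a say: no score, however high, can override a fixed rule. Every name below (the actions, the rule set) is invented for illustration, not taken from any real system.

```python
# A hypothetical sketch of 'hardwiring' an ethical veto: rules are
# checked before utility, so no score can override them. All names
# here (actions, rules) are illustrative.
FORBIDDEN = {"cook_the_cat"}  # ethical rules fixed at design time

def choose_action(candidates):
    """Pick the highest-scoring action that passes the ethical filter."""
    allowed = [(a, score) for a, score in candidates if a not in FORBIDDEN]
    if not allowed:
        return None  # refuse to act rather than break a rule
    return max(allowed, key=lambda pair: pair[1])[0]

# The optimiser may rank the forbidden action highest, but it is
# filtered out before the utility comparison ever happens.
options = [("cook_the_cat", 0.9), ("order_takeaway", 0.6), ("cook_pasta", 0.5)]
print(choose_action(options))  # -> "order_takeaway"
```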

Hardwiring a complicated ethical code into a machine is a serious challenge for the software developers of today, especially as this decision-making process could make them liable for the consequences. The issue comes up often in discussions around autonomous vehicles – the trolley problem of today. What will, and should, a car do in a situation where only one of two lives can be saved – pedestrian or driver? Who makes that decision, and who is responsible for the consequences?

Experts have suggested that to remain ethical, AI needs to be transparent and trustworthy, working with humans rather than as a replacement. AI that takes on cognitive work needs to be robust against manipulation, argue researchers from the Machine Intelligence Research Institute. There needs to be a clear record of the AI’s systems and workings to facilitate an investigation when mistakes are made. If we can’t identify why an AI did something, we can’t make sure it doesn’t repeat it.
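One simple pattern that supports such investigations, sketched here with invented field names, is an append-only audit trail: every decision is logged together with its inputs and the reasons behind it, so a mistake can be traced after the fact.

```python
# A minimal sketch of an auditable decision record, so that when a
# system errs, investigators can see what it saw and why it chose.
# The fields and values are illustrative assumptions.
import json
import time

def decide_and_log(inputs, model_version, decision, reasons, log_file):
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,        # what the system observed
        "decision": decision,    # what it chose to do
        "reasons": reasons,      # rules or features that drove the choice
    }
    log_file.write(json.dumps(record) + "\n")  # append-only audit trail
    return decision

with open("decisions.log", "a") as f:
    decide_and_log({"speed_kmh": 48, "obstacle": "pedestrian"},
                   "v1.2", "brake", ["obstacle_detected"], f)
```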

Cooperation and context

Equally important is cooperation between the parties involved at every step of an AI machine’s design, creation and application. Ethics needs to be considered at the point of creation, entwined in the workings rather than applied in retrospect. It would be easy to imagine the polarisation of software developers or AI manufacturers on one side and ethics committees or risk management experts on the other. As John C Havens, author of Heartificial Intelligence, stresses, we “need to inform the AI manufacturing process with programming based on the codification of our deeply held beliefs”. This will be complicated by the commercial nature of AI development and the swift advancement of the technology, not to mention the challenge of ‘codifying’ a set of beliefs that all involved can agree on, free from prejudice and bias. Would this vary by country? By industry?

There are also ethical issues to consider beyond the workings of AI, in a wider context: the impact on society and the individual. Unemployment could rise, which psychologists warn could harm mental health, and decisions would need to be made over who benefits from the work of AI and the revenue it produces. Who would pay the tax required to support a non-working human population? A reliance on AI is likely to change human behaviours and interactions – what consequences could there be? We also need to tackle security issues, bias, and potentially even the rights of robots with ‘cognition’.

See opportunities, not limitations

For those forging ahead in technology and evolving the capabilities of AI at an extraordinary rate, ethical considerations could be viewed as inhibitors. Ultimately, they are enablers. Technology is only useful to our society if it works with us and our existing systems. Without trust and liability, robust regulation and checks, AI could veer from maliciously lethal to unproductive and ineffective, neither of which is helpful.

Ethicists will take centre stage in the coming years, facilitating the move towards wide-scale adoption of AI and ensuring automation works for humans rather than against them, boosting productivity alongside people. It will be interesting to see how technologists and ethicists progress together to deliver a safe, society-wide roll-out of life-changing tech.
