4 key considerations for ethical tech


Written by Helena Ward, PR Strategist, Ethical Intelligence

Just as companies might require Software as a Service, Data as a Service or Content as a Service, they are increasingly seeking out ‘Ethics as a Service’ – but what exactly is it?

‘Ethics as a Service’ is a term recently coined in the academic paper ‘Ethics as a Service: a pragmatic operationalisation of AI Ethics’. In a nutshell, it’s the provision of ethical assistance, advice or guidance throughout AI design, development and use.

Morley and her co-authors argue that, despite the limits of current ethical guidelines and the failures of the tools set out to implement them, these shortcomings can be overcome by adopting Ethics as a Service.

Recent developments in AI have sparked a proliferation of ethical principles and guidelines. But the mechanisms currently in place to govern AI have proven insufficient to protect end-users. We have ethical guidelines, but there is a significant gap between these ethical principles and their operationalisation in AI systems. Simply put, these principles aren’t being put into practice. So, what can we do? How can ethics be successfully embedded into AI design, implementation and use?

There are tools that try to bridge the gap between theory and practice, called translational tools. They aim to help designers embed ethical principles in AI systems, but they fall short: as Morley frames it, current translational tools are either too strict or too flexible.

 

Lost in translation

A translational tool is too strict when it operates in a ‘top-down’ manner, providing fixed guidelines on how principles should be enforced in practice. But we can’t simply take a tool, apply it to an algorithmic system, and expect an ethically robust outcome. Doing so fails to recognise the situational judgement that ethics requires: we need to assess each system circumstance by circumstance, and what works for one system might be ineffective in another. Strict translational tools also encourage a ‘tick-box’ mentality, where practitioners believe they can ‘complete’ ethics at the beginning of the design process. But using a tool does not guarantee an ethical system. Algorithmic systems aren’t static; they change constantly and will need to be reviewed and regulated as an ongoing process as their outcomes change – you’re only as ethical as your last decision.

On the flip side, a translational tool is too flexible when it doesn’t by itself offer sufficient practical guidance about what should be done. It might, for example, be able to identify a biased dataset but offer little support on how to mitigate that bias. This lack of objective criteria also leaves these tools vulnerable to manipulation: an AI practitioner is free to choose whichever tool is most convenient, resulting in AI systems that are convenient for the company but not necessarily ethical.
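To make this concrete, here is a minimal, hypothetical sketch in Python of the kind of check a flexible tool might run. It flags a gap in outcomes between groups in a dataset and then stops; the column names, threshold and toy data are illustrative assumptions, and nothing in the output says how the disparity should be mitigated – that judgement is left entirely to the practitioner.

```python
from collections import defaultdict

def flag_outcome_disparity(records, group_key, outcome_key, threshold=0.1):
    """Report the positive-outcome rate per group and flag large gaps.

    This mimics what a 'flexible' translational tool can do: detect a
    disparity. It deliberately offers no guidance on mitigation.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        positives[group] += int(row[outcome_key])

    # Positive-outcome rate for each group, and the largest gap between groups.
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > threshold

# Toy, made-up example: loan approvals split by an applicant attribute.
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
rates, gap, flagged = flag_outcome_disparity(data, "group", "approved")
print(rates, gap, flagged)  # The tool stops at "flagged = True"; the "what next?" is unanswered.
```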

Translational tools try to take ethical principles and put them into practice, but current tools fail to do this. While strict translational tools give too much instruction, flexible tools don’t give enough. So how can we move from principle to practice? How can AI ethics be usefully operationalised for AI practitioners?

 

Who is responsible?

When thinking about AI ethics, we also need to think about who is responsible for the ethics of an AI system. Should the company be solely responsible for its ethical impact, or should external bodies take responsibility for regulating company ethics?

Let’s go back to translational tools to figure this out. Strict translational tools fail to operationalise ethics because they give too much responsibility to external powers: external bodies end up dictating ‘what goes’ in a company without fully understanding the company, its product, or the AI technologies it uses. On the other hand, flexible translational tools give too much responsibility to the company itself. While employees undoubtedly know their company best, complete internal responsibility raises worries over ethical manipulation – and this doesn’t have to be intentional: what’s called ‘founder’s bias’ might introduce ethical biases into a system unknowingly. For ethics at least, more heads are often better than one.

Our strict and flexible tools have shown us two things: 1) too much external responsibility isn’t beneficial for ethics, and 2) too much internal responsibility is likely to fail too. Finding a balance between internal and external responsibility is the way forward, and this middle way will be crucial for the operationalisation of ethics.

So who is responsible for a company’s ethics? It looks like both internal and external bodies should be: responsibility for a company’s ethics should be shared between internal employees and external ethical auditors, working collaboratively towards a more ethical outcome.

 

The Goldilocks level of abstraction

If we hope to apply ethics effectively, in a way that is both useful for AI practitioners and protective of users, we need to find a middle way: a bridge between principle and practice that is neither too strict nor too flexible, and which strikes the right balance between internal and external responsibility.

So what does this Goldilocks level of abstraction look like? For starters, ethical outcomes are established by multiple agents: internal employees work together with external ethical auditors and a community of ethical advisors to realise the company’s intentions ethically. Responsibility for ethical AI does not lie solely with one or the other, but is shared between internal and external bodies. This collaboration is not confined to the design stage; it continues throughout the process, with the company’s ethics subject to timely reviews.

Just as a company conducts product reviews, drawing on technical experts to detect and resolve technical bugs, it should draw on ethical experts during those same reviews to detect and resolve ethical bugs.

With internal employees and external ethical committees working together, ethical bugs can be resolved effectively. This collective partnership is, according to Floridi, the most effective way to bring ethics into practice within AI. Internal employees can guide the process with a full understanding of the inner workings of the company and its product, while ethical experts bring a diverse set of opinions, helping to clear internal biases and ensuring intentions are realised in an ethical manner. This middle way – Ethics as a Service – can provide companies with a full understanding of their product, its social implications and its ethical impact, with ethical experts working alongside the company to ensure that ethics is at the forefront of AI design, implementation and use.

 

The link between principle and practice

We have seen that there is currently a problem with implementing ethics in AI: we have ethical principles, but they’re not being put into practice. Although translational tools try to bridge this gap, they fail by being either too strict or too flexible. So, what do we do? If we hope to apply ethics effectively and produce genuinely ethical systems, we need to adopt Ethics as a Service. This collaborative middle way is the most effective route to ethical AI systems, and with ethical experts working together with AI experts, company intentions can be realised in a way that is not harmful to those around them.

Turning to Ethics as a Service will not only bring a company successful ethical outcomes: it will bring a diverse set of opinions, a full understanding of the company’s products, a greater understanding of the company’s intentions and vision, and contribute to building trust with end-users by ensuring that the company brings benefits, rather than harms, to society.

There is a massive issue within AI ethics: ethics isn’t being applied effectively in our society, and this leaves users of our technologies vulnerable to harm. Now we have a solution – Ethics as a Service.



