Communications, confidence and trust – Moving towards safe autonomous systems


Written by Professor John McDermid OBE FREng, Director, Assuring Autonomy International Programme

While the potential benefits of AI and ML are clear, the introduction of systems that use these technologies cannot be rushed. Safety is paramount and must go hand in hand with the development of the system. If you’re developing, buying, or regulating autonomous technologies, you need assurance of the system’s safety.

Q. What do we mean by safety assurance and how can we get it?
A. Communication, confidence, and trust.

Autonomy, artificial intelligence (AI), machine learning (ML): buzzwords that crop up in the news, on social media, and in conversation every day.

The societal benefits of such technologies are more evident now than ever: quicker diagnosis of illness and disease, contactless delivery from a self-driving pod (at least in some parts of the world), and perhaps autonomous taxis in a few years.

They can also bring huge benefits to organisations:

  • Quicker processing of data.
  • Smarter case management.
  • Improved efficiency.


Communication

A system that communicates the appropriate information to you so that you understand how it is making decisions.

There are two parts to communication. The first is communicating with users (who might be your staff if the system is one you’re buying or rolling out in your organisation) to understand what they expect from the system they will be using. Understanding what users expect of the system, what it will and won’t do, and how it will support their work helps ensure that the system is acceptable and desirable.

The second is communication from the system itself, so that it can explain what it has done. The decisions made by ML and AI algorithms are often hidden; to know that a system is safe you need to understand what it has done and what it has achieved. This requires a way for the system to communicate and explain what decisions have been made, what actions have been taken, and why (a simple sketch of such a decision record follows the list below). These explanations may need to be:

  • Prior to use — to support safe deployment
  • Contemporaneous with the decision-making process
  • Retrospective — to enable investigation of incidents and accidents
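
By way of illustration, here is a minimal sketch of the kind of structured decision record a system could emit to support contemporaneous explanation and retrospective investigation. It is written in Python; the field names and the example scenario are hypothetical, not drawn from AMLAS or from any standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical decision record: one entry per decision the system makes,
# so that explanations can be given at the time and reviewed afterwards.
@dataclass
class DecisionRecord:
    timestamp: datetime        # when the decision was made
    inputs: dict               # the sensor readings or features used
    decision: str              # the action selected
    rationale: str             # a human-readable explanation
    confidence: float          # the model's confidence, in [0, 1]
    alternatives: list = field(default_factory=list)  # options considered but rejected

def log_decision(record: DecisionRecord, audit_log: list) -> None:
    """Append a decision to an audit log for retrospective investigation."""
    audit_log.append(record)

# Example: a delivery pod explaining why it stopped.
audit_log: list[DecisionRecord] = []
log_decision(DecisionRecord(
    timestamp=datetime.now(timezone.utc),
    inputs={"obstacle_distance_m": 1.2, "obstacle_class": "pedestrian"},
    decision="stop",
    rationale="Pedestrian detected within the 2 m stopping envelope",
    confidence=0.97,
    alternatives=["slow", "swerve_left"],
), audit_log)
```

A record like this supports all three needs: it can be reviewed before deployment as part of testing, displayed to users as decisions are made, and replayed after an incident.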


Confidence

A system that is built using tools, techniques and data that give you confidence in its decisions, actions, abilities, safety and limitations.

You need confidence that the system you are developing, buying or regulating will behave as expected: that uncertainty and risk are as low as possible and that the decisions taken will be ‘good’ decisions. This is partly about understanding the system’s requirements so that it can be developed to meet them, and partly about demonstrating (or having demonstrated to you) that the ML elements of the system can perform their tasks safely, i.e. with the risk of human harm as low as is reasonably practicable.

Confidence requires knowing that the assessment of safety is based on solid evidence and data, and that the risks are well understood and, where possible, quantified. This gives you confidence in the system itself, but confidence also depends on knowing that the system is used as intended, that its data is kept up to date, and so on.


Trust

A structured way to understand and evaluate the system and assess whether its safety assurance is sufficient and can be trusted.

You need to be able to trust that the system will meet the expectations that society (or your staff or customers) have of it. This means accepting that the risks associated with the system are appropriate and are being mitigated as far as possible, and understanding the limits of the technology.

The evidence that the system or its developers provide to give you confidence in its safety must itself be trustworthy. You need to know that the data the system has been trained on is appropriate, e.g. that it fairly represents all classes of users, and that the system design has considered different types of users.
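
To make the idea of fair representation concrete, here is a minimal sketch, in Python, of a check that each user group makes up at least a minimum share of a training set. The group labels and the 10% threshold are illustrative assumptions, not a recommended standard.

```python
from collections import Counter

def check_representation(group_labels: list[str], min_share: float = 0.10) -> dict[str, bool]:
    """Return, for each group, whether its share of the data meets min_share."""
    counts = Counter(group_labels)
    total = len(group_labels)
    return {group: counts[group] / total >= min_share for group in counts}

# Example: shares of hypothetical user groups in a training set of 1,000 items.
labels = ["adult"] * 800 + ["child"] * 150 + ["wheelchair_user"] * 50
print(check_representation(labels))
# {'adult': True, 'child': True, 'wheelchair_user': False} -> under-represented
```

A check like this is only a starting point; what counts as fair representation depends on the system’s context of use and must be argued for as part of the safety case.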


Assurance: Greater than the sum of its parts

Assurance is about bringing communication, confidence and trust together in a structured and evidenced way.

The work we’re doing through the Assuring Autonomy International Programme is advancing the safety assurance of autonomous systems through collaboration with academia, industry and regulators from across the world and across different domains.

We have a large body of research into best practice and processes for gathering the evidence needed to demonstrate that these new, complex technologies are safe. For example, we have just published a methodology for the assurance of machine learning components in autonomous systems (AMLAS). This is a key part of our wider research strategy, which will provide guidance on key areas of the assurance of autonomous systems.

