AI with an ethical conscience: Harnessing the wisdom of healthcare professionals to deliver intelligent care and support planning for all patients

Written by Jonathan Abraham, CEO and Co-founder, Healum

At Healum we have always believed that access to proactive, personalised healthcare should be a fundamental right for every person, whatever their cultural background, location or means.

The NHS Long-Term Plan’s ambition is to make personalised care and support planning ‘business as usual’ for 2.5 million people with long-term health conditions. NHS England defines personalised care as people having choice and control over the way their care is planned and delivered, based on ‘what matters’ to them and their individual strengths and needs. Proactive, personalised care is all about enabling people to understand the set of health choices available to them and empowering them to make those choices. This is true whether we are talking about medication, medical services, community-based services, or eating healthy food and leading a healthy lifestyle.

In the UK there are growing health inequalities in the uptake of personalised care and support planning. The Health Inequalities and Personalised Care Report highlighted a widening gap in this area between people of white ethnicity and people from other ethnic groups. Factors such as income, housing, environment, transport, education and work affect people’s ability and motivation to make informed choices about their care and to manage their health. It is hard to think about eating healthily if you are suffering from mental ill health because of poor employment or housing.


Enabling people with long-term conditions to achieve the health outcomes that matter to them

At Healum, our focus is to make it easier for people living with long-term conditions to manage their health. We do this by improving access to the daily support they need to make healthy choices and to plan their care, as part of a shared decision-making process with their clinicians. Our personalised care and support planning software and connected patient-facing apps enable healthcare professionals to give patients more help and support at the moments that matter. By using our software in appointments, they can co-create a digital plan of care and support, which patients can then access from any device, at any time.
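
To make the idea of a co-created digital plan concrete, here is a minimal sketch of how such a plan might be modelled in code. The names and fields (CarePlan, PlanItem and so on) are our own illustrative assumptions, not Healum's actual data model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PlanItem:
    # One entry in a plan: a medication, a community service, a goal, etc.
    category: str
    title: str
    detail: str

@dataclass
class CarePlan:
    # Co-created during the consultation, then synced to the patient's app.
    patient_id: str
    clinician_id: str
    items: List[PlanItem] = field(default_factory=list)

plan = CarePlan(patient_id="p-001", clinician_id="c-042")
plan.items.append(PlanItem("goal", "30-minute walk", "Three times a week"))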

Clinicians make a series of judgements about the optimal set of medication, advice, educational content, community services, goals, actions and resources available to their patient. The wisdom behind those judgements is built on a career of empirical observation: treating other patients, shared learning from peers, and the evidence-based practices they have adopted. The challenge is that there are a great many medical and self-care options available, every patient is different, and healthcare professionals simply don’t have the time to assess all of the options relevant to the patient in front of them.


The role of machine learning in assisting healthcare professionals to create personalised plans of care and support

In 2018 we were inspired by the Academy of Medical Sciences’ report, which called for AI-based research into the strategies needed to maximise the benefits of treatment for patients with multimorbidity. It explored whether machine learning tools could be developed to help healthcare professionals deliver comprehensive, integrated care to these patients. It also outlined the need for patient and carer priorities to be better captured and incorporated into patients’ care plans.

With the support of a research and development grant from Innovate UK’s Digital Health Technology Catalyst, we set about developing a system to do just that. It would enable healthcare professionals to determine the optimal set of medical and non-medical choices, which could then be assembled into a personalised plan of care and support, regardless of an individual’s race, gender, medical history, DNA or socioeconomic circumstances. We wanted to make it quick and efficient for healthcare professionals to access a set of recommendations for the patient sitting in front of them during a consultation. Despite their complexity, machine learning models offered us a way to do this: they let us present the recommendations through a probabilistic system that ranks them, making care and support planning quicker, simpler and more relevant, and, more importantly, doing so in a way that is ethical and effective.
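
To illustrate what a probabilistic ranking system might look like, the sketch below scores each candidate care and support option with a simple classifier and sorts the options by the predicted probability of a good outcome. Everything here, the model choice, the features and the data, is a placeholder invented for the example rather than a description of our production system.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder training data: each row combines patient features with option
# features; each label records whether that option led to a good outcome.
rng = np.random.default_rng(0)
X_train = rng.random((500, 6))
y_train = rng.integers(0, 2, 500)
model = LogisticRegression().fit(X_train, y_train)

def rank_options(patient_features, candidate_options):
    # Sort candidate options by P(good outcome | patient, option).
    rows = np.array([np.concatenate([patient_features, opt["features"]])
                     for opt in candidate_options])
    probs = model.predict_proba(rows)[:, 1]
    ranked = sorted(zip(candidate_options, probs), key=lambda p: p[1], reverse=True)
    return [(opt["name"], float(prob)) for opt, prob in ranked]

options = [{"name": "dietitian referral", "features": [1, 0, 1]},
           {"name": "walking group", "features": [0, 1, 1]}]
print(rank_options([0.6, 0.2, 0.9], options))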


Creating an effective and ethical machine learning system

We learned early on that an intelligent system to help clinicians create care and support plans will only work if its design and delivery adhere to the principles of trust, consent, diversity, efficacy and safety.

We began by asking healthcare professionals which sources of information they trusted most and who they learned from when recommending care and support options for patients with long-term conditions. For the 100+ healthcare professionals we spoke to, the biggest source of trusted intelligence was the wisdom of their peers and their patients. When we first started our R&D work, there was no effective way to crowd-source a set of second opinions from clinical peers for a given set of patient characteristics, and no effective way to analyse, triangulate and present a set of optimal recommendations. For us, this led to a very simple design concept.

What if we could provide healthcare professionals with a set of trusted care and support plan recommendations based on the interventions and outcomes that their clinical peers had observed when treating similar patients? 

This key concept underpinned our R&D work in using crowd-sourced peer recommendations to determine the optimal set of medical and non-medical choices for patients with type 2 diabetes. Trust in the recommendations presented is linked to trust in the wisdom of other clinical peers who are using the software – a true live learning environment!
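
A minimal sketch of that concept, under some invented assumptions: each record from a peer's consultation pairs a patient feature vector with the intervention chosen and the outcome observed, and interventions are ranked by how often they worked for the most similar patients. A real system would need far more careful feature design, similarity measures and safety checks.

from collections import defaultdict
import numpy as np

def recommend_from_peers(new_patient, records, k=20):
    # records: (patient_vector, intervention, good_outcome) tuples drawn
    # from clinical peers' past consultations.
    vectors = np.array([r[0] for r in records])
    distances = np.linalg.norm(vectors - np.asarray(new_patient), axis=1)
    neighbours = np.argsort(distances)[:k]  # the k most similar patients

    tried, succeeded = defaultdict(int), defaultdict(int)
    for i in neighbours:
        _, intervention, good_outcome = records[i]
        tried[intervention] += 1
        succeeded[intervention] += int(good_outcome)

    # Rank interventions by observed success rate among similar patients.
    return sorted(((succeeded[i] / tried[i], i) for i in tried), reverse=True)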


The challenges in overcoming algorithmic bias

Algorithmic bias is an issue when using machine learning techniques for anything relating to patient care, and overcoming it is paramount if we are ever going to use machine learning to present the optimal set of health choices for any patient. Although we built our machine learning models to incorporate ethnicity and socioeconomic background data, we faced significant challenges in training those models on appropriate datasets.

Firstly, our live-learning data would not be large enough to train for ethnicity, income or region, so we had to find historical datasets on which to train and validate our machine learning models. Secondly, the coding of these datasets is inconsistent and limited in scope; for example, most historical research databases do not let us break down people of South Asian background into Indian, Pakistani and Bangladeshi groups. Thirdly, there is an issue of consent and governance around the ethical use of research datasets for AI development. We found that some private research companies were operating in a grey area, selling anonymised extracted patient information. That was not what patients and healthcare professionals told us they wanted, and it went against our values.
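
One routine way to surface this kind of bias, sketched below under our own assumptions about the data, is to evaluate a trained model separately on each demographic subgroup of a held-out test set and compare the results; large gaps between groups are a warning sign that a model should not be relied on as it stands.

import numpy as np
from sklearn.metrics import roc_auc_score

def audit_by_group(model, X_test, y_test, groups, min_rows=30):
    # Report the model's discrimination (AUC) separately per subgroup.
    # groups: subgroup labels (e.g. ethnicity codes) aligned with X_test rows.
    groups, y_test = np.asarray(groups), np.asarray(y_test)
    for group in sorted(set(groups)):
        mask = groups == group
        if mask.sum() < min_rows:  # too few rows to judge reliably
            print(f"{group}: insufficient data ({mask.sum()} rows)")
            continue
        auc = roc_auc_score(y_test[mask], model.predict_proba(X_test[mask])[:, 1])
        print(f"{group}: AUC = {auc:.3f} over {mask.sum()} patients")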

Instead, we chose to work only with research databases that have rigorous ethical standards, such as the Royal College of General Practitioners’ Research Surveillance Environment, governed by the Primary Health Sciences department at Oxford University. Their data is anonymised and can only be used under a strict protocol that adheres to the standards of their Scientific Ethics Committee. Our hope is that the learnings we generate from this research over the next few years will give healthcare professionals a set of effective recommendations to include in care and support plans that overcome issues of algorithmic bias.


Our approach to the next 5 years

NHSX’s recent draft AI strategy outlined that we all need to play our part in ensuring that openness, fairness, safety and efficacy are built into the AI technologies we bring to market. Our approach is to ensure that the wisdom of healthcare professionals plays a part in training any machine learning algorithm. We believe it is immensely important to provide personalised care and support planning that is free from algorithmic bias to people from all communities. We also need to include patients in our approach to AI research in order to understand how to handle consent and how to communicate the benefits and risks. This can be achieved by rigorously following the NICE Evidence Standards Framework for digital health technologies, NHSX’s ethical codes of practice for the development of AI technologies, and the recently published Transparency Standards for Algorithms.

Going into 2022, Healum will be opening up its live learning network to healthcare professional stakeholders across primary, secondary, community and social care settings. We want to incorporate the wisdom of more healthcare professionals and patients in a safe and ethical way, so that we can improve the quality of, and access to, personalised care and support choices for more people with long-term conditions.

