Why we need to stop talking about “What AI Will Do”

Written by Angela Bradbury, COO, HelpFirst

Technological advances have sparked new debates about AI replacing human professionals. Reporting on these debates often misrepresents AI – what it is, and how people interact with it in real life.

Someone recently sent me an episode of the New York Times’ Hard Fork podcast and asked me what I thought of it. It featured a study that found ChatGPT “defeated doctors” at diagnosing diseases. It made me a bit angry. Headlines like this grab attention, but they are misleading.


It’s not AI vs humans – it’s humans using tools

“Is AI better at diagnosing conditions than doctors?” – that’s like asking “is a hammer better at putting nails in walls than human hands?”. A hammer won’t do anything on its own. A human without tools would struggle. The result of a human using a hammer might be excellent, but it depends on their skill with the tool.

The study wasn’t testing ‘AI versus doctors’. It was comparing three different scenarios:

  1. Doctors using conventional resources (like medical reference websites)
  2. Doctors using both the above and ChatGPT, but without prompt training
  3. The output of ChatGPT from a standardised prompt, developed by researchers

All of these scenarios are examples of humans using tools. There are many more scenarios that weren’t tested or reported on. The third scenario performed best, but this doesn’t mean AI is “defeating” doctors. It means we need to be careful about how we integrate such tools into medical practice.


Real-world examples: Building tools for humans

At HelpFirst, we’re developing AI tools to support caseworkers with admin tasks. We are humans, building tools that other humans will use in their work with people in vulnerable situations. It involves developers, caseworkers, supervisors and service users. Each has their own set of motivations, biases, needs and experiences. We have a responsibility to make ourselves aware of these as much as possible – without that, we can’t design useful and ethical systems.

My last role was at the Cooperative AI Foundation, a research funder. Their mission is to make AI systems collaborative, not competitive (think training them on games like Diplomacy rather than chess). I do believe this is valuable research. But we should remember that this is still about humans creating tools. Humans will use the tools, impacting other humans. These collaborative tools might be used to tackle joint problems – like food supply and housing. They might also be used to defraud people faster and to wage more destructive wars.


What this means for public services

So what does this mean for healthcare and other caring services? Improving, say, diagnostic accuracy is a service design challenge. Perhaps clinical and AI experts will co-design self-service diagnostic tools. These could provide a ‘first pass’ diagnosis and signpost to a specialist service. Maybe GPs will go through training programmes on how to use AI diagnostic tools. These could reduce referral times and streamline care pathways. I predict a combination of these and more.

But even with excellent AI tools, we’ll still need human-to-human care. Patients need to be able to trust their care providers. They need to get tailored support to adhere to their treatment plans. They need to feel like they matter. 

Every healthcare system needs to improve patient care while managing resources. The solution doesn’t lie in the question of “humans versus AI”. It’s in finding the best combination of human expertise, relationships and technology.


We need to talk about human choices

As we continue developing AI systems, we need to stop talking about AI as if it acts on its own. Language matters, because it can change our perception, even if only by a small amount every day. Conversations about “what AI will do” often anthropomorphise and demonise AI. This whips up fear – for example, about job losses. It also absolves those who design and deploy the tools from responsibility.

We need to keep talking about the human decisions that shape these tools: Who’s building them? Who’s using them? Who are they used with or on? What training and support are users getting? What are the power dynamics at play?

Remember: AI doesn’t do anything. Humans do things, sometimes with AI tools. Humans make choices in development, implementation, access, upskilling and more. Those choices are what matter.

So please, the next time you hear someone talking about “what AI will do”, ask instead: “what are the humans doing?”

