Data saves lives
March 2020
Developers of smart speaker systems for healthcare consumers must address challenges like AI bias and human-centered application design.
“I was so tired, I started talking to Alexa.”
I can’t tell you how many times I’ve heard people refer to Alexa as something more than a digital assistant this past year. More people are talking to and asking questions of voice-activated digital assistants and cognitive-powered colleagues than ever before. Nearly 19 million homes in the U.S. have a smart speaker, and Juniper Research predicts that within four years more than half of all American homes will have and use one: a smart speaker in more than 70 million U.S. households by 2022.
Already, Alexa has more than 1,000 healthcare-related “skills,” voice applications that let users ask questions and hear the assistant’s responses. These skills let consumers ask about pharmaceutical-company-sponsored prescription medications, yoga, illnesses and much more.
Before we begin asking these devices to diagnose an ailment or suggest a treatment, however, some advances will be needed. First, not all answers are created equal, particularly when it comes to how they may be influenced by those building the software. Second, these applications need to be designed so they are truly focused on the healthcare consumer.
Head Games
Using natural language processing and artificial intelligence (AI), smart speakers and digital assistants can understand much of what we say and respond. Over time, they will get smarter, and their capabilities and uses will increase dramatically.
The problem is, it’s well-known that societal biases, both intended and unintended, can creep into AI systems. Bias can emanate from humans themselves and from the historical data used to train the algorithm. Consider, for example, algorithms used by universities to assess admissions. If the training data reflects bias from previous admissions procedures, like gender discrimination, these biases must be corrected, or they’ll persist. Bias can also stem from self-learning AI systems that create new AI systems, resulting in biases that multiply over subsequent generations of computer code.
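As a rough, hypothetical illustration (the data, field names and function below are invented for this example, not drawn from any real admissions system), one way to catch this kind of inherited bias is to audit the historical training data for disparities before any algorithm is trained on it:

```python
from collections import defaultdict

def admission_rate_by_group(records, group_field="gender"):
    """Compute the historical admission rate for each group in the training data.

    `records` is a list of dicts such as {"gender": "F", "admitted": True}
    (hypothetical field names used purely for illustration).
    """
    totals = defaultdict(int)
    admitted = defaultdict(int)
    for record in records:
        group = record[group_field]
        totals[group] += 1
        if record["admitted"]:
            admitted[group] += 1
    return {group: admitted[group] / totals[group] for group in totals}

# Toy historical data reflecting a past discriminatory admissions process.
history = [
    {"gender": "M", "admitted": True},
    {"gender": "M", "admitted": True},
    {"gender": "F", "admitted": False},
    {"gender": "F", "admitted": True},
]
print(admission_rate_by_group(history))  # {'M': 1.0, 'F': 0.5}
```

A gap like the one in this toy output is a signal that the historical data needs correcting, by re-weighting, re-labeling or gathering new examples, before it teaches the same bias to the algorithm trained on it.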
Compounding the bias problem is that although humans create AI, we don’t always understand how it arrives at specific answers. This lack of transparency makes it difficult not only to eliminate bias but also to instill trust.
As we’ve said in a recent report and webinar, ethics will become particularly important as AI becomes more ubiquitous, and as AI systems increasingly learn from one another, not just from the inputs that humans provide.
Bias Control
To eliminate or at least minimize AI bias, we need to start before the algorithm is built, with the inputs themselves. This requires human input and control, as well as ongoing human supervision. As we’ve explained, this is not as arcane a process as one might think. The concept is familiar to parents who provide feedback and guidance to raise their children to be good members of society, and there are well-understood tools and frameworks from the human sciences that can be used to instill ethics into the design and operation of AI.
The need to control for bias in the AI world isn’t much different from how medicine has been practiced for years. Healthcare providers are influenced by their biases, as well, and it’s the healthcare consumer’s responsibility (and in her best interest) to understand how a course of treatment may affect her.
Further, from healthcare’s point of view, bias may have nothing to do with race, gender, age or any number of other identifiers that make us all different. Instead, it may be the way healthcare consumers are perceived: Are we patients? Sufferers? Survivors? Consumers? Or some combination?
Those supplying the answers to smart speakers would do well to view their customers as everyday people simply looking for information. As Sebastian Jespersen, who writes frequently on the relationship between brands and people, points out, few of us will want to know about products during our first healthcare encounter with a smart speaker. Rather, we’re more interested in learning about a specific illness or injury when we make a query. Digital healthcare solutions, he says, should be designed with a focus not on a product, nor on a person who’s suffering, but on the consumer as a human being.
Human-Centered Solutions
In addition to addressing bias, smart speakers must do more, and do it more quickly, than we’d expect from a human encounter if they are to make a significant impact on healthcare delivery.
For example, one hospital uses smart speakers in patient rooms to expedite answers to common questions. But does it really work in a new way? Or is the smart speaker just a different way of getting the same old answers? Patients can ask for a nurse, but the smart speaker still can’t tell them how long the wait will be. Or they can ask, “What is my diet?” and the response will be “a bland diet.” But is this really the wording most patients would use, or would they say, “What can I eat today?” And do they want to know what type of diet to follow, or the specific foods and meals they can safely eat? Both the answer and the question are constructed from the healthcare provider’s point of view, not the patient’s. The system would be better off being more empathetic toward what people really need at that moment.
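To make the point concrete, here is a minimal, hypothetical sketch (the utterances, data and function names are invented for illustration, not taken from any hospital’s system) of mapping the questions patients actually ask to one patient-centered answer, rather than echoing the chart language back at them:

```python
# Hypothetical intent mapping for an in-room voice assistant.
# The goal: accept the phrasings patients actually use and answer in
# their terms (specific foods), not the provider's terms ("bland diet").

DIET_UTTERANCES = {
    "what is my diet",
    "what can i eat today",
    "what can i have for lunch",
    "am i allowed to eat solid food",
}

def handle_utterance(utterance, patient):
    """Return a patient-centered response for a recognized diet question."""
    text = utterance.lower().strip(" ?")
    if text in DIET_UTTERANCES:
        # Translate the clinical order into concrete choices the patient can act on.
        foods = ", ".join(patient["allowed_foods"])
        return f"Today you can have {foods}. Would you like me to order something?"
    return "I'm not sure about that yet. I can ask your nurse for you."

patient = {"allowed_foods": ["oatmeal", "bananas", "rice", "chicken broth"]}
print(handle_utterance("What can I eat today?", patient))
```

The design choice the sketch illustrates is simply that the vocabulary and the answer both start from the patient’s question, not from the order written in the chart.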
With the number of smart speakers worldwide expected to grow – and retailers slashing prices to push acceptance – there’s no doubt a smart speaker will become your BFF sooner or later.
But as is the case with many emerging technologies, we’re not quite there yet. Developers of these systems must address challenges like AI bias and design applications that are truly focused on the healthcare consumer.
Originally published here.