Medical artificial intelligence can perform with expert-level accuracy and deliver cost-effective care at scale. IBM’s Watson diagnoses heart disease better than cardiologists do. Chatbots dispense medical advice for the United Kingdom’s National Health Service in lieu of nurses. Smartphone apps now detect skin cancer with expert accuracy. Algorithms identify eye diseases just as well as specialized physicians.
Some forecast that medical AI will pervade 90% of hospitals and replace as much as 80% of what doctors currently do. But for that to come about, the healthcare system will have to overcome patients’ distrust of AI. There is a lot of work to be done: Our recent research shows that patients are reluctant to use healthcare provided by medical artificial intelligence even when it outperforms human doctors, believing that their medical needs cannot be adequately addressed by algorithms.
In a series of experiments, we explored patients’ receptivity to medical AI. The results showed a strong reluctance across procedures ranging from a skin-cancer screening to pacemaker-implant surgery. We found that when healthcare was provided by AI rather than by a human care provider, patients were less likely to utilize the service and wanted to pay less for it. They also preferred having a human provider perform the service even if that meant there would be a greater risk of an inaccurate diagnosis or a surgical complication.
The resistance to medical AI seems to stem from a belief that it does not take into account one’s idiosyncratic characteristics and circumstances. People view themselves as unique, and we find that this sense of uniqueness extends to their health. By contrast, people see medical care delivered by AI providers as inflexible and standardized — suited to treat an average patient but inadequate for a particular individual.
There are several steps care providers can take to overcome patients’ resistance to medical AI. Providers can start by assuaging concerns about being treated as an average or a statistic by increasing the perceived personalization of the care delivered by AI. For AI-based healthcare services (such as chatbot diagnoses and app-based treatments), providers could emphasize the information gathered about each patient to generate a unique profile, including lifestyle, family history, genetic profile, and details about the patient’s environment. They could also include verbal cues — such as “based on your unique profile” — that suggest personalization.
In addition, healthcare organizations could make a special effort to spread the word that AI providers do deliver personal and individualized healthcare — for example, by sharing evidence with the media. Having a physician confirm the recommendation of an AI provider can also make people more receptive to AI-based care: We found that people are comfortable utilizing medical AI if a physician remains in charge of the ultimate decision.
AI-based healthcare technologies are being developed and deployed at an impressive rate. But harnessing their full potential will require that we first overcome patients’ skepticism of having an algorithm, rather than a person, making decisions about their care.
Copyright 2019 Harvard Business School Publishing Corp. Distributed by The New York Times Syndicate.