Mind the Gap — Why Closing the AI Transformation Gap Is Critical

A transformation gap is the fissure between what a technology can do and the human capability to apply and use it as intended. The widest transformation gaps tend to appear in industries where levels of funding, training, and communication are mismatched. For technology to succeed, it needs adopters and it needs advocates. For AI technology to proliferate into healthcare, providers need to buy in, but expecting them to do so without clear communication, training, and support is myopic.

The moral of the story is that technology is only as good as it is usable and trustworthy. If we can’t figure out how to implement and use our developing tech, what good is it to us? This transformation gap is especially pronounced in medicine, where the artificial intelligence revolution is fully underway.

A 2020 study published in IFIP Advances in Information and Communication Technology identified a huge, “intrinsic gap between AI and healthcare in regards to how (clinical significance, practicality) and why (commercial deployment, safety/security).” The study focused on the gap between Computer-Aided Diagnosis (CAD) and clinicians’ and diagnosticians’ willingness to adopt it.

Through a series of methodical surveys, the study found that the biggest hang-up with adopting CAD was the technology’s inability to explain how it reached its diagnostic conclusions. Minimal explanation, according to the study, makes persuading both clinicians and patients difficult, and leaves the clinician to step in with an intuitive explanation of their own.

Indeed, the downside of traditional AI is that it lacks true transparency. Tech pioneers can grow hoarse explaining how AI logic rules work, but often the AI itself can’t explain how it came to a single conclusion or result. Without clarity on how the tech reaches a result, care providers can feel left in the dark. Imagine several radiologists in a room examining a CT scan: they can ask each other questions and use their collective brain trust to come to a logical conclusion.

“Did you notice 'X'?” 

“What about this possibility of 'Y'?” 

These types of conversations help establish trust and build confidence among care professionals, but the inherent nature of AI means those conversations are lost. One study suggests that because algorithms are generally designed to select a single, most likely diagnosis, they can sometimes produce suboptimal results for patients with multiple concurrent disorders. If CAD produces only one result, it may be glossing over additional possibilities that a human examining the imaging and providing the diagnosis would not necessarily miss.
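
To make the single-answer problem concrete, here is a minimal sketch contrasting a system that reports only its top prediction with one that surfaces a ranked differential. The condition names and probabilities are hypothetical stand-ins, not output from any real CAD product:

```python
import numpy as np

# Hypothetical model output: one probability per candidate condition.
# In a real CAD system these scores would come from the diagnostic model.
conditions = ["pneumonia", "pulmonary edema", "atelectasis", "pleural effusion"]
probs = np.array([0.46, 0.31, 0.15, 0.08])

# The typical single-diagnosis design: report only the argmax.
print("Single output:", conditions[int(np.argmax(probs))])

# A ranked differential keeps concurrent possibilities visible,
# closer to the conversation radiologists would have with each other.
ranked = sorted(zip(conditions, probs), key=lambda cp: cp[1], reverse=True)
for name, p in ranked[:3]:
    print(f"{name}: {p:.0%}")
```

Keeping the runner-up possibilities visible gives clinicians something closer to the back-and-forth they would have with a colleague, rather than a verdict to accept or reject.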

There is a growing body of research on explainable AI (XAI) as a way to increase trust and transparency, but until fully explainable AI arrives, getting healthcare professionals to buy in to artificial intelligence will take a calculated approach.
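
Explainability techniques vary, but many work by measuring how much each input drives a model’s output. As a rough illustration, here is a sketch of permutation importance, one common XAI method, using scikit-learn. The data is synthetic and the feature names are made up for the example; this is not a clinical model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["age", "blood_pressure", "lab_marker_a", "lab_marker_b"]
X = rng.normal(size=(500, 4))
# Synthetic label that depends mostly on the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda ns: ns[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

The output ranks features by how much shuffling them degrades the model’s accuracy, a simple, model-agnostic way to show a clinician which inputs a prediction actually leaned on.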

Communication

Ultimately, successful adoption of artificial intelligence in healthcare hinges on the industry’s ability to communicate clearly about how the technology works. That kind of across-the-board transparency is the only way to secure widespread acceptance of AI. Furthermore, AI in healthcare is still new, and making sure providers feel comfortable as the tech develops further is going to take some legwork on the front end.

Saying an AI product works without providing insight into how or why does nothing to ease hesitations. Developers must be patient and communicate clearly with their prospective users in order to build trust in their products. No one knows the value of communication better than healthcare providers, so meeting them on that level is a critical element of fostering comfort with emerging technology.

In practice, this communication can take many forms. Seminars, training, onboarding teams, user support, and customer satisfaction efforts can all help ensure AI users feel good about adopting new tech.

Because while AI itself may not be able to fully explain its every decision, individual companies can. And they need to be transparent about how their technology works, what rules it uses, and why it can be trusted.

It’s our responsibility to nurture that trust and cultivate those conversations.
