While artificial intelligence has had a foothold in healthcare for years, the recent AI boom has accelerated its spread into the sector and given rise to many new solutions. But in an industry historically slow to adopt new technology, why has AI suddenly leapfrogged into healthcare? And what are the implications of adopting this technology at such a pace, and what questions should care organizations be asking?


It's no secret that over the last few months applications of artificial intelligence have skyrocketed. Recent advancements in computing power, natural language processing, and deep learning algorithms have fueled the popularity of large language model (LLM) systems like OpenAI's ChatGPT, Google Bard, and Microsoft Bing Chat. For seemingly the first time, AI is in our pockets, our homes, and our classrooms. This sudden boost in accessibility has resulted in near-viral adoption, with platforms like ChatGPT reporting over 1.8 billion unique monthly visitors. And as user bases have evolved, so too have perceptions of artificial intelligence and its place in the workplace and in society at large.

Today, the adoption of AI into everyday life and business no longer seems like the crack in the dam that leads to a dystopian I, Robot reality. Instead, organizations across industries are rushing to embrace AI to automate routine tasks, improve services, and gain a competitive edge. Global spending on AI is projected to surpass $300 billion by 2025, roughly triple last year's total.

In the healthcare sector specifically, this sudden shift in adoption readiness is a jarring about-face to many experts and practitioners. The industry's eagerness to embrace the potential benefits of AI, driven by the promise of streamlined operations, automated tasks, and improved patient outcomes, stands in stark contrast to its historically slow adoption of new technology. This enthusiasm raises concerns: not out of doubt that AI can deliver on its promise, but because the current AI boom is rife with unproven, unregulated solutions. Without careful consideration and evaluation of these solutions, mass deployment of AI in healthcare poses serious risks to patient privacy, data security, and organizational health at large. How far do we let AI run before we rein it in? It's not just us asking these questions.

On May 14, OpenAI CEO Sam Altman testified before a U.S. Senate subcommittee and agreed with lawmakers' calls to regulate AI technology. Among the topics discussed were potential collaboration between government and the tech industry, the formation of an agency to issue licenses for large-scale models, state-sanctioned testing of AI models before public release, and antitrust rules. Altman's testimony underscores the importance of addressing the risks associated with AI adoption across all industries, including healthcare. It also raises a handful of important questions: What are we open to letting into our exam rooms, and who are we comfortable giving access to patient data? Of the AI solutions that are already here, how do we determine what works and what doesn't? What's safe? What's proven?

In the AI medical documentation space specifically, we encourage weighing four considerations when evaluating potential solutions: clinical model training, hallucination rates, accuracy, and security.

Clinical Model Training: Generative AI platforms require substantial training data to perform tasks effectively. Platforms such as ChatGPT, NVIDIA's StyleGAN, and Google's Magenta Project are examples of generative AI models that rely on extensive data (text, images, music) to produce accurate and meaningful outputs. Generative medical documentation solutions are no different: they require an extensive understanding of medical terminology, documentation standards, and natural language, and care organizations must adopt solutions with a proven track record. AI-powered medical documentation is a highly nuanced use case with high consequences for errors, and choosing a tool that isn't built for and trained on a vast corpus of this specialized data is asking for trouble. This leads to the next important consideration.

Hallucination Rates: Hallucinations refer to the tendency of some AI systems to generate inaccurate outputs and frame them as fact. They can occur when a model extrapolates from patterns in its training data, producing output that is fluent but factually incorrect. With the stakes in medicine significantly higher, AI-powered clinical documentation solutions must drive their hallucination rate to effectively zero. Newer, less-proven, less-trained solutions on the market tend to have higher hallucination rates.
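As one illustration of how a hallucination rate might be quantified, the sketch below flags any generated statement whose content words are absent from the source transcript. Both the grounding check and the sample note are hypothetical simplifications for illustration, not how any production scribe verifies its output (real systems rely on far more sophisticated entailment and entity checks plus human QA):

```python
def is_grounded(statement: str, transcript: str) -> bool:
    """Naive check: every content word of the generated statement
    must appear (as a substring) in the source transcript."""
    stopwords = {"the", "a", "an", "of", "and", "is", "was", "to", "in"}
    words = [w.strip(".,").lower() for w in statement.split()]
    return all(w in transcript.lower() for w in words if w not in stopwords)

def hallucination_rate(generated: list[str], transcript: str) -> float:
    """Fraction of generated statements not supported by the transcript."""
    if not generated:
        return 0.0
    unsupported = sum(not is_grounded(s, transcript) for s in generated)
    return unsupported / len(generated)

transcript = "Patient reports mild headache for two days. Taking ibuprofen 200 mg."
note = [
    "Patient reports mild headache for two days.",
    "Patient denies fever.",   # "denies" and "fever" never appear in the transcript
    "Taking ibuprofen 200 mg.",
]
rate = hallucination_rate(note, transcript)  # 1 of 3 statements flagged -> ~0.33
```

Even this toy check hints at why the problem is hard: a reasonable clinical paraphrase like "denies fever" is flagged simply because the words never appear verbatim, which is why robust evaluation pairs automated grounding checks with human review.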

Accuracy: Among AI medical documentation solutions, the most important accuracy metrics refer to the percentage of "medical entities" (bits of information from the visit deemed medically relevant) that are correctly captured and documented. A reliable solution should demonstrate the ability to capture relevant medical information consistently and precisely, without hallucinating. This requires extensive training (clinical models) and quality assurance mechanisms that create a positive feedback loop between AI and QA.
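Concretely, entity-capture accuracy is typically reported as precision and recall over the set of medical entities in a reference (ground-truth) note versus the AI-generated one. The minimal sketch below hand-supplies the entity sets; in practice they would come from clinician-reviewed notes and an extraction model:

```python
def entity_metrics(reference: set[str], generated: set[str]) -> dict[str, float]:
    """Precision: share of generated entities that are correct.
    Recall: share of reference entities the system captured."""
    true_pos = len(reference & generated)
    return {
        "precision": true_pos / len(generated) if generated else 1.0,
        "recall": true_pos / len(reference) if reference else 1.0,
    }

reference = {"hypertension", "lisinopril 10 mg", "headache", "no known allergies"}
generated = {"hypertension", "lisinopril 10 mg", "headache", "dizziness"}  # 1 missed, 1 spurious

metrics = entity_metrics(reference, generated)  # precision 0.75, recall 0.75
```

Note that the spurious entity here ("dizziness") is exactly the hallucination case from the previous section, which is why accuracy and hallucination rate have to be evaluated together rather than in isolation.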

Data Security and Patient Privacy: The security of patient data is perhaps the most crucial element to consider when adopting AI medical documentation solutions. Because these solutions inherently process and store highly sensitive patient data, care organizations must prioritize stringent data security measures, strong data governance policies, and robust encryption, in addition to baseline HIPAA compliance. Access controls, user authentication, and audit trails should also be in place to monitor and track data access, ensuring only authorized personnel can view and modify records.
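The access-control and audit-trail requirements above can be sketched in a few lines of role-based logic. The roles, actions, and log fields below are illustrative assumptions, not a template for a compliant system (a real deployment needs authenticated identities, tamper-evident log storage, encryption at rest, and more):

```python
from datetime import datetime, timezone

# Illustrative role-based permissions (hypothetical roles and actions)
PERMISSIONS = {
    "read": {"physician", "nurse", "auditor"},
    "write": {"physician"},
}

audit_log: list[dict] = []

def access_record(user: str, role: str, record_id: str, action: str) -> bool:
    """Grant or deny access by role, logging every attempt either way."""
    allowed = role in PERMISSIONS.get(action, set())
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "record": record_id,
        "action": action,
        "allowed": allowed,
    })
    return allowed

ok = access_record("dr_smith", "physician", "pt-1042", "write")  # permitted
denied = access_record("j_doe", "billing", "pt-1042", "read")    # denied, still logged
```

Logging denied attempts alongside granted ones is the point of an audit trail: it lets an organization detect and investigate unauthorized access patterns, not just record legitimate use.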

As we stand on the precipice of widespread AI deployment — in healthcare and beyond — it's crucial that producers and consumers actively participate in the discussion and collectively commit to ethical (and equitable) implementation, transparency, and accountability. Responsible implementation in healthcare hinges on these commitments, and is essential to mitigating risk and ensuring positive outcomes for patients, clinicians, and care organizations alike.
