Carte Blanche and the AI Black Box: Why We Need to Be Talking About Trust and Safety in AI

In an AI landscape that is expanding at lightning speed, it's becoming increasingly evident that not every business will ride the wave of viral adoption or capitalize on this boom to achieve long-term success. Knowing that, what is the recipe for success in this climate? At DeepScribe, how can we ensure that we thrive in a suddenly saturated market? More importantly, how can we do so safely?

As we thoughtfully build the clinical AI of the future, these are the questions we're asking and the discussions we're having, and one way or another they always circle back to our emphasis on trust and safety. That emphasis gives us a broader framework: regardless of how effective DeepScribe is, how accurate our notes are, or how satisfied our partners are, safeguarding systems are fundamental to clinical buy-in and long-term viability. Without systems that protect patients and allow clinicians to use our solution safely, our success is worthless. In light of the recent expansion of AI in healthcare, we believe this commitment is more important than ever. Moreover, we're concerned that the industry as a whole might be overlooking how critical it is.

There are a few primary topics that fuel this worry: Regulation, Responsibility, Quality Assurance, and Interpretability. Or, really, the lack thereof.

The absence of centralized regulation in the AI industry allows for relatively unchecked expansion and, in this era of rapid advancement, poses serious risks to trust and safety. Without clear guidelines or oversight, it becomes crucial for companies to establish their own ethical frameworks and compliance standards. In healthcare, however, accountability for AI-generated mistakes falls primarily on clinicians rather than AI companies, meaning that while it's advantageous for these companies to prioritize clinical accuracy, the consequences of making severe errors are limited to market perception. In other words, third-party companies are not held responsible if they make a mistake that a clinician doesn't catch. Absolved of blame.

Additionally, many emerging AI solutions lack quality assurance measures and interpretability, both key to ensuring reliable AI outputs and avoiding malpractice. Without regulatory mandates enforcing human quality assurance or high interpretability, AI as a whole has carte blanche to operate without transparency or accountability. In high-stakes domains like healthcare, where flawed outputs can have life-or-death consequences, this black-box approach is reckless and ultimately unfair to the clinicians who may have to defend those outputs in court.

The reality is that no central entity requires the construction of these core pillars of trust and safety. Unregulated AI that isn't reviewed closely for errors, doesn't explain how it arrives at its conclusions, and isn't held responsible when it makes mistakes poses massive risks, especially in healthcare. It's why many thought leaders are advocating for AI regulation, and even for pauses in development, as the world gains a better understanding of how to build and implement this technology safely.

At DeepScribe, we've always asked ourselves: How do we leverage NLP and AI to automate medical documentation without exposing clinicians and patients to extreme risk? In the early stages, long before we raised our Series A or partnered with large care organizations like Covenant Health, our collaboration with Vanta allowed us to achieve, and ultimately maintain, complete SOC 2 and HIPAA compliance across every corner of our business. Combined with data de-identification processes, military-grade encryption, and regular security audits, these efforts established tangible internal standards that echoed our external commitment to trust and safety.
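To make the idea of de-identification concrete, here is a minimal, purely illustrative sketch of a pattern-based redaction pass (our own toy example, not DeepScribe's actual pipeline). Real HIPAA Safe Harbor de-identification covers 18 identifier categories and typically relies on trained models rather than regexes alone; the patterns and placeholders below are assumptions for illustration.

```python
import re

# Illustrative only: a toy redaction pass over visit text.
# Production de-identification (HIPAA Safe Harbor) covers 18
# identifier categories and uses trained NER models, not just regexes.
PATTERNS = {
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[MRN]": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def deidentify(text: str) -> str:
    """Replace obvious identifiers with typed placeholders."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(deidentify("Seen 03/14/2023, MRN: 48213, callback 415-555-0199."))
# -> "Seen [DATE], [MRN], callback [PHONE]."
```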

But we also needed to deliver on our promise of automating clinical documentation and then stand behind its accuracy. In the beginning, this meant leveraging certified medical scribes to audit AI outputs on the back end and review notes for accuracy. This quality assurance step ensured we could deliver the best possible product to our customers while simultaneously training our models on clean data. Even at the expense of efficiency, we prioritized this human-in-the-loop process because it allowed us to scale without sacrificing note quality or accuracy, or exposing our clinical partners to undue risk. By combining industry-standard medical scribe training with data-labeling training, we empowered both our humans in the loop and our AI to deliver on our promise.
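As a rough sketch of what a human-in-the-loop gate can look like in code (an illustration under our own assumptions, not DeepScribe's internal system), every AI draft passes through a reviewer before release, and each reviewed pair is kept as a clean training example:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewedNote:
    """An AI draft plus the scribe-approved final version."""
    ai_draft: str
    final: str
    reviewer: str
    corrected: bool = False

@dataclass
class ReviewQueue:
    """Minimal human-in-the-loop gate: nothing ships unreviewed."""
    training_examples: list = field(default_factory=list)

    def review(self, ai_draft: str, scribe_edit: str, reviewer: str) -> ReviewedNote:
        note = ReviewedNote(
            ai_draft=ai_draft,
            final=scribe_edit,
            reviewer=reviewer,
            corrected=(scribe_edit != ai_draft),
        )
        # Every reviewed pair doubles as clean supervised training data.
        self.training_examples.append((note.ai_draft, note.final))
        return note

queue = ReviewQueue()
note = queue.review(
    ai_draft="Pt reports chest pain for 2 day.",
    scribe_edit="Patient reports chest pain for 2 days.",
    reviewer="scribe_042",
)
print(note.corrected)  # True: the correction becomes a training example
```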

This was the DeepScribe foundation. Over time, our transcription engines got better, our NLP models improved, and our data-labeling process became more efficient and user-friendly, which helped unlock exponential growth. As DeepScribe evolved, our AI went from writing less than 20% of a note to writing more than 90%. Yet even with industry-leading accuracy, our humans in the loop are still involved in the process and still contribute to the final output in some capacity: sometimes correcting medical mistakes or removing duplicate information, and sometimes simply giving the note a read-through and a stamp of approval. No matter how accurate DeepScribe is, we firmly believe that human review is fundamental to the responsible deployment of these models. It embodies our commitment to trust and safety and is core to our mission.
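One way to make a figure like "the AI writes more than 90% of the note" concrete (a hypothetical measure of our own, not necessarily how DeepScribe computes it) is to compare the AI draft against the human-approved final note and treat their text similarity as the AI's share of the finished note:

```python
from difflib import SequenceMatcher

def ai_share(ai_draft: str, final_note: str) -> float:
    """Estimate the fraction of the final note preserved from the AI
    draft, using longest-matching-subsequence similarity."""
    return SequenceMatcher(None, ai_draft, final_note).ratio()

draft = "Patient reports chest pain for 2 days, denies fever."
final = "Patient reports chest pain for 2 days, denies fever or cough."
print(f"{ai_share(draft, final):.0%}")  # ~92%: most of the final text came from the draft
```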

Yet as AI expands rapidly amid this boom and increasingly automated solutions come to market, there's concern that the industry is prone to overlooking the foundational importance of trust and safety. At a time when regulation, responsibility, quality assurance, and interpretability seem more important than ever, AI as a whole appears to be moving in the opposite direction: toward systems that are increasingly automated and opaque.

At DeepScribe, our goal is to lead in the opposite direction. We understand the potential risks associated with AI in healthcare and are committed to mitigating them through responsible and transparent practices. A core tenet of this commitment is clinical collaboration. We keep an open line of communication with our clinical partners, and our availability to them allows for active engagement in discussions about bias avoidance, fairness, model improvement, fine-tuning, and other topics that support our trust and safety efforts. By bringing the end user into the process, we can continue to promote responsible implementation of the DeepScribe solution even as our AI improves and automation increases.

DeepScribe has always advocated for developing AI alongside humans. As we transition to a more automated future and our AI becomes less reliant on direct human intervention, we can continue to promote responsible deployment by embracing interpretability and transparency through collaborative policies. By peeling back the curtain and bringing end users deeper into the inner workings of our solution, we create a safety net for increased automation. Only by thoughtfully implementing trust and safety can we fully embrace the evolving landscape of AI and position ourselves for a future that prioritizes reliability, trustworthiness, and positive clinical outcomes.
