
By Adam Turteltaub
The rise of generative AI has brought transformative potential to healthcare—from streamlining administrative tasks to supporting clinical decision-making. But alongside these benefits comes a growing concern: Shadow AI. Alex Tyrrell, Chief Technology Officer, Health at Wolters Kluwer explains in this podcast that this term refers to the use of unauthorized, unmonitored AI tools within organizations. In healthcare, where data privacy and patient safety are paramount, Shadow AI presents a unique and urgent challenge both now and in the future.

Healthcare professionals often turn to generative AI tools with good intentions—hoping to reduce documentation burdens, improve workflows, or gain insights from complex data. However, many of these tools are unproven large language models (LLMs) that operate as black boxes. They’re prone to hallucinations, lack transparency in decision-making, and may inadvertently expose Protected Health Information (PHI) to the open internet.

This isn’t just a theoretical risk. The use of public AI tools on personal devices or in clinical settings can lead to serious consequences, including:

Privacy violations
Legal and regulatory non-compliance
Patient harm due to inaccurate or misleading outputs

Despite these risks, many healthcare organizations lack visibility into how and when these tools are being used. According to recent data, only 18% of organizations have a formal policy governing the use of generative AI in the workplace, and just 20% require formal training for employees using these tools.

It’s important to recognize that most employees aren’t using Shadow AI to be reckless—they’re trying to solve real problems. The lack of clear guidance, approved tools, and education creates a vacuum that Shadow AI fills. Without a structured approach, organizations end up playing a game of whack-a-mole, reacting to issues rather than proactively managing them.

So, what can healthcare organizations do to address Shadow AI without stifling innovation?

Audit and Monitor Usage

Start with what you can control. For organization-issued devices, conduct periodic audits to identify unauthorized AI usage. While personal devices are harder to monitor, you can still gather feedback from employees about where they see value in generative AI. This helps surface use cases that can be addressed through approved tools and structured programs.
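If your proxy or DNS logs can be exported, even a small script can surface where generative AI services are being reached from organization-issued devices. The Python sketch below is a minimal illustration only: the domain watchlist, the proxy_log.csv filename, and the device_id/domain column names are all assumptions to adapt to your own logging tools.

```python
import csv
from collections import Counter

# Illustrative, not exhaustive: domains associated with public
# generative AI tools. Replace with your organization's own watchlist.
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def audit_proxy_log(path):
    """Tally requests to known generative AI domains per device.

    Assumes a CSV export with 'device_id' and 'domain' columns;
    adjust the field names to match your proxy or DNS logging tool.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].strip().lower() in GENAI_DOMAINS:
                hits[row["device_id"]] += 1
    return hits

if __name__ == "__main__":
    for device, count in audit_proxy_log("proxy_log.csv").most_common():
        print(f"{device}: {count} request(s) to generative AI services")
```

A tally like this won't tell you why someone reached for a public tool, but it does turn a vague worry into a concrete list of devices and use cases to follow up on through approved channels.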

Procure Trusted AI Tools

Use procurement processes to source AI tools from vetted vendors. Look for solutions with:

Transparent decision-making processes
Clear documentation of training data sources
No use of patient data or other confidential information for model training

Avoid tools that lack explainability or accountability—especially those that cannot guarantee data privacy.

Establish Structured Governance

Governance isn’t just about rules—it’s about clarity and oversight. Develop a well-articulated framework that includes:

Defined roles and responsibilities for AI oversight
Risk assessment protocols
Integration with existing compliance and IT governance structures

Make sure AI governance is not siloed. Those managing AI tools should be at the table during strategic planning and implementation.

Educate and Engage

Education is the cornerstone of responsible AI use. Employees need to understand not just the risks, but also the right way to use AI tools. Offer formal training, create open forums for discussion, and build a culture of transparency. When people feel informed and supported, they’re more likely to choose safe, approved tools.

Protect PHI with Precision

In clinical workflows, PHI is often unavoidable. That’s why it’s critical to:

Deidentify patient data whenever possible (a minimal sketch follows this list)
Ensure only authorized systems, processes, and personnel have access to PHI
Maintain up-to-date business associate agreements and data processing contracts
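To make the deidentification point concrete, here is a minimal Python sketch that scrubs a few obvious identifiers from free text before it leaves a controlled system. The patterns and the scrub helper are illustrative assumptions, and pattern matching alone falls well short of HIPAA Safe Harbor's full list of eighteen identifier types; in practice, vetted deidentification tooling should do this job.

```python
import re

# Minimal pattern-based scrubber for a few obvious identifiers.
# This is NOT full HIPAA deidentification; production workflows
# should rely on vetted, validated tooling.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[MRN]"),
]

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for pattern, tag in PATTERNS:
        text = pattern.sub(tag, text)
    return text

note = "Pt seen 03/14/2025, MRN: 4471982, callback (555) 867-5309."
print(scrub(note))
# -> "Pt seen [DATE], [MRN], callback [PHONE]."
```

The design point is where the scrubbing happens: identifiers should be stripped inside your controlled environment, before any text is handed to an external model, not after.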

As you get closer to the bedside, the margin for error shrinks. Public devices and unlicensed LLMs should never be used in direct patient care.

The regulatory landscape around AI is evolving rapidly—especially at the state level and in the EU. Even if federal guidelines are still catching up, organizations must be proactive. Bake privacy by design into your AI strategy from the beginning. Treat compliance not as a burden, but as a strategic advantage that protects patients and enables innovation.

And be sure to listen to this podcast to learn more about the risks of Shadow AI.