December 9, 2025 | Keavy Murphy


Secure AI Is the Only AI in Healthcare

By Keavy Murphy, VP, Information Security

How to Leverage AI, Securely

Artificial Intelligence (AI) is one of the most exciting and versatile technologies on the market today. It offers almost limitless possibilities, and the healthcare industry is poised to take full advantage of this innovation. Though there are risks associated with AI adoption, Net Health has seen many organizations leverage this emerging technology and generate demonstrable business value. Fortunately, it is possible to enable AI in your own business in a secure, HIPAA-compliant, and privacy-focused manner.

We recommend that healthcare businesses (private practices, skilled nursing facilities, etc.) take advantage of AI, and we will explain how to do this in a way that is secure and aligned with the best practices of data privacy and HIPAA compliance.

1. Create a Roadmap

AI is buzzy, exciting, and innovative, but adopting it without a clear strategy leads only to expensive experimentation. That experimentation not only fails to deliver real business value; it also creates major security risks.

As you evaluate AI adoption in your business, document a roadmap for what you want to achieve. A clear strategy for what AI will be used for, how it will help the organization, and the business problems it is meant to solve is critical. Without a roadmap, there is no governance, and governance is the foundation of cybersecurity: having it in place reduces security risk, which is why it is crucial as you evaluate and then implement AI within your healthcare business.

2. Remember the Critical Data You Have Been Entrusted with

Protected Health Information (PHI) is the most sensitive and intimate data about an individual. It often includes details that cannot be changed (unlike a password that can be rotated, or a credit card number that can be reissued). It is therefore of utmost importance to be cautious about how and where PHI is used when it comes to AI.

Before inputting PHI into an AI-enabled tool or chatbot, consider the risk. Does this technology have HIPAA safeguards enabled? Would my patients be upset if they learned I was using their PHI in this manner, with AI technology?
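One way to operationalize that caution is a simple guard that scans text for obvious PHI identifiers before it ever leaves your environment. The sketch below is hypothetical: the regex patterns, pattern names, and function names are illustrative assumptions, and a real deployment would rely on a vetted de-identification service rather than ad-hoc regexes.

```python
import re

# Hypothetical patterns for a few common PHI identifiers. A production
# system would use a vetted de-identification library, not ad-hoc regexes.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def find_phi(text: str) -> list[str]:
    """Return the names of PHI patterns detected in `text`."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(text)]

def safe_to_send(text: str) -> bool:
    """Only allow text to reach an external AI tool if no PHI pattern matches."""
    return not find_phi(text)
```

Calling `safe_to_send()` before each request to a chatbot or LLM turns the "consider the risk" question into an enforced checkpoint rather than a habit you hope staff remember.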

3. Follow the Data Privacy Best Practice: “No Surprises”

In the data protection arena, we develop technology and make decisions based on the foundational idea that consent and communication are non-negotiable. No one wants to be surprised by how and why their PHI is being used, processed, or stored. This information should be provided up front (this is why privacy policies exist and why HIPAA disclosures are the norm).

The "no surprises" principle of data privacy extends to AI: patients, residents, and families should always know when and how AI is being used in their care. As a provider, be completely transparent about what the AI is doing and how it informs care decisions. Asking the patient for consent to use AI technology (for example, right before you use an ambient scribe tool) is a key step before deploying AI at your business.

4. Check Your Work

Though AI is often marketed as a panacea that can do absolutely everything for us, in every use case, it has real limitations. To ensure accuracy of responses, use AI in your healthcare setting for tasks, operations, and practices you know you could do yourself.

The best human-AI collaboration involves humans who could do the work themselves but choose strategic, careful assistance. Delegate tasks to AI that you already know how to do. That way, you can check the answers and outputs for accuracy while still reducing your manual effort and workload.

5. Conduct Vendor Due Diligence

Ensure your AI procurement process is not just a check-box exercise. Too often, vendor selection happens at the eleventh hour, meaning security due diligence is accelerated, rushed through, or skipped entirely. As a result, healthcare organizations onboard tools and software that may have massive security gaps or immature cybersecurity programs. Since AI is a new technology and much of its functionality is still evolving, it is more important than ever to conduct thoughtful, careful vendor due diligence as you procure a new AI tool for your healthcare business.

Make the vendors you are evaluating prove that they do what they claim when it comes to cybersecurity: request external audit reports, penetration test summaries, and evidence of defense-in-depth cybersecurity controls.

In addition, pay careful attention to how transparent a healthcare technology vendor is about its AI development and decision-making process. A vendor's willingness to be open about its AI development typically indicates a robust security control program embedded into its technology.

6. Keep a Human in the Loop

Above all, keep the human in the loop. AI has, excitingly, reduced workloads in healthcare organizations and provided tremendous value from an automation perspective. That value does not mean human oversight is no longer required; in fact, it is more critical than ever to follow the AI security best practice of retaining human supervision. This best practice states that, wherever AI is leveraged, a live individual with experience and domain expertise should evaluate the AI tooling and automation to ensure it is reliable, accurate, and operating as intended. "Checking the homework" of AI is essential, and it is an especially critical step wherever PHI is introduced into AI, because this data is so sensitive.

Four actionable ways to keep a human in the loop:

  • Check the outputs an AI tool gives you for accuracy
  • Be cautious before entering confidential data into an LLM
  • Inform patients when you use AI software in their care, and obtain their consent
  • Adopt AI in your business iteratively, so that rollbacks can occur if there are errors

Secure AI Use Is Crucial

AI is exciting and transformative, and adoption is key to staying competitive in today's healthcare space. As with any new technology, it is critical to keep security top of mind and follow baseline compliance best practices as you adopt AI in your business. Start by creating a roadmap of what you want to accomplish with AI and the business opportunity it will enable; that roadmap will then make clear what governance is needed to adopt AI securely. From there, focus on the remaining best practices: being transparent with patients about your use of AI, vetting AI vendors, and checking AI's output for accuracy.


Keavy Murphy

Vice President of Security

Keavy Murphy is a Boston-based security professional currently serving as the Vice President of Security at Net Health. Passionate about cybersecurity, especially for new and emerging companies, she prioritizes using soft skills to manage compliance and risk management effectively in parallel with business objectives. Previously, she served in information security roles at Starburst Data, Cambridge Mobile Telematics, Alegeus and State Street. She enjoys writing about and researching the benefits of effective communication within the security space. Her work has been published in Dark Reading and Info Security Magazine and presented at seminars including the Chief Data and Analytics Officers Conference and FutureCon. She is an active volunteer with Boston Cares, has served in the ISACA Engage Mentor program, and holds both CIPP and CIPM certifications.