October 15, 2025

AI Security is a Party, and You’re All Invited

Bringing teams together to build a common language around healthcare AI

By William Dougherty

This edition covers:

  • Why threat modeling must evolve with AI innovation
  • The importance of building a common language around healthcare AI
  • Five questions to ask when building safe and secure healthcare AI

There is no doubt that artificial intelligence (AI) and large language models (LLMs) have the potential to meaningfully improve patient care while reducing costs. However, patients and providers alike are wary of these new tools, as they introduce a range of new security, privacy, and regulatory risks. The good news is that these risks can be addressed by identifying them early and analyzing system and process vulnerabilities, controls, and threats against a defined list of risks, a common cybersecurity practice called threat modeling. Through threat modeling, teams can identify how to eliminate or minimize each threat, whether it is a privacy threat or a threat to clinical fidelity.

Many organizations have implemented standard threat models, but few of those models cover AI capabilities. This presents the opportunity for not only an updated threat model, but one that unites the healthcare industry and helps us march toward a future that balances innovation with patient safety.

This year, in partnership with our AI engineering team, Omada developed and released a holistic AI threat model for healthcare called PROMISE TO MAP. This model is foundational to how we design, build, and operate AI systems within our care delivery platform, and it creates a common language between risk-assessing teams and engineering teams.

At Omada, we created this so we could have a cross-functional (security, IT, privacy, compliance, healthcare safety and quality, engineering and product) system to evaluate whether we were appropriately addressing the threats that might arise from the AI we are building. We’ve made it public and free to use, inviting everyone to the party, so to speak.

The model distills 18 months of learnings into a 30-page document that details what healthcare AI developers, and the organizations they work with, need to know about building safe, secure, and compliant systems using non-deterministic technology. Before diving in, read on to learn how an industry-spanning taxonomy can help build safer AI-powered systems.

Why Build a Common Language Around Healthcare AI?

AI and LLM systems introduce new sets of risks, controls, and vocabularies into the product development process. Early on at Omada, there was misalignment over the meaning of terms like “fine-tuning” and “judges.” We needed shared definitions so that both systems and humans could communicate consistently and effectively.

For example, what does the term “guardrail” mean? In basic AI engineering terms, a guardrail is simply another LLM that evaluates the input or output of a system.

Say you’re worried that your chatbot might get tricked into telling an offensive joke. To protect against this, you want an offensive joke guardrail. The implementation of this would be to take the output of your chatbot LLM and send it to a guardrail LLM asking “is this text offensive?”

If the guardrail LLM says yes, then you block the output from being sent to the user. If the guardrail LLM says no, then the output is allowed to return to the user. This is just one small example of a new set of controls and a new vocabulary that teams must learn and use.
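
To make that pattern concrete, here is a minimal sketch in Python. The call_llm helper is a hypothetical placeholder for whatever model client your team uses; this is not Omada’s actual implementation, just the shape of the control.

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a call to your LLM provider of choice.
    # Swap in your real model client here.
    raise NotImplementedError("wire this up to your model client")

def guarded_reply(chatbot_output: str):
    # Ask a separate guardrail LLM to judge the chatbot's draft reply.
    verdict = call_llm(
        "Answer only YES or NO. Is the following text offensive?\n\n" + chatbot_output
    )
    if verdict.strip().upper().startswith("YES"):
        return None           # block: the draft never reaches the user
    return chatbot_output     # allow: the draft is returned unchanged

In practice, the guardrail prompt, the blocking behavior (reject, rewrite, or escalate to a human), and the model used for the check are each tuning decisions in their own right.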

Additionally, certain terms of art are not common across the various teams that need to collaborate for a functional and scalable result. For example, the compliance team may not understand what a threat model is, or have a completely different term for it. Making sure a shared vocabulary exists across risk management organizations is also a helpful accelerator to the product development process.

Confronting Organizational and Technical Complexities

AI Risks are a Whole-Company Problem

When assessing the risks of a new AI product or service, teams need to account not only for security and privacy risks, but also for clinical efficacy, new burdens on the care team, and the impact on patient experience. A holistic approach is needed because these systems touch all facets of the company. Using our new threat model, Omada’s teams were able to identify which components needed the input of clinical experts, which needed security evaluations, which workflows needed adjustment by the care team, which components required compliance scrutiny, and where privacy and identity management come into play.

Re-Building Processes Around Non-Determinism

Most legacy threat models assume deterministic workflows, and most risk management organizations write deterministic controls. AI systems are non-deterministic, meaning that a given input can produce a range of probabilistic outputs. Controls must therefore be designed to handle the potential for false positives and false negatives in the output.

As an example, Omada’s nutritional education application will evaluate a photo of a member’s meal and attempt to identify the food and the macronutrients. Its accuracy depends on the quality of the photo, the portion size, whether it looks like something already in our database, and a host of other variables. The team had to account for this in how we display macronutrients to the user, and in how the user is able to edit the details of their food. We needed nutrition and AI subject matter experts to tune the prompts and the guardrails, and needed a way to verify the accuracy of the system both during development and in production.

Allowing teams to develop systems that deal with non-determinism meant identifying a rubric that measures true positives, false positives, true negatives, and false negatives, otherwise known as a confusion matrix. Tuning the system required the development of a System Output Matrix that helps teams balance precision (how often the system’s identifications are correct) and recall (how much of what is actually present the system catches).
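
For readers who want to see the arithmetic, here is a minimal sketch in Python with made-up counts; the numbers are purely illustrative, not Omada evaluation data.

# Hypothetical confusion-matrix counts for one food-identification evaluation run.
true_positives = 88    # foods present in the photo and identified correctly
false_positives = 12   # foods the system reported that were not actually there
false_negatives = 20   # foods that were present but the system missed

# Precision: of everything the system reported, how much was correct?
precision = true_positives / (true_positives + false_positives)

# Recall: of everything that was actually there, how much did the system catch?
recall = true_positives / (true_positives + false_negatives)

print(f"precision = {precision:.2f}, recall = {recall:.2f}")  # 0.88 and 0.81 here

Raising the system’s confidence threshold typically trades recall for precision, which is exactly the balance the tuning exercise has to strike.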

In this scenario, cross-functional teams were tasked with tuning the system and determining when the performance was good enough to push to production.

5 Questions to Ask When Building Safe and Secure Healthcare AI

Designing and building AI systems within healthcare is a complicated process, and having a systematic approach makes it a little easier. For example, when developing this new AI threat model, we discovered that similar risks often had similar cures, and we were able to establish modularized bundles of controls that we can reuse when similar risks present themselves.

If you are contemplating adding AI systems to your care delivery process, and don’t already have an approach, the following five questions are a good place to start:

  • What do you want the system to do and not do?
  • What PHI is the minimum necessary to train the system or operate it safely for patients?
  • Have your clinicians helped train the model and prompts?
  • Are you worried about creating an FDA-regulated software device?
  • How are you protecting against malicious users?

Once you’ve addressed these questions, and are ready to dive deeper, we invite you to read the free PROMISE TO MAP threat model, and join the healthcare AI security party.