August 27, 2025

Opinion: 5 Ways Payers Can Embrace AI to Promote Population Health


by Michael Alexander

This edition covers:

  • Progress over perfection on the path to interoperability
  • Harnessing AI to make small fixes with big impact
  • Envisioning a compliant, system-wide LLM for population health


In the mid‑1940s, a young Bell Labs mathematician named Claude Shannon noticed something peculiar about the telephone system. It worked—beautifully, most of the time—yet it was forever haunted by static, cross‑talk, and the errant hum of distant wires.

But instead of replacing every mile of copper to remove these inconveniences and distractions, Shannon proposed something radical: embrace the noise. Encode each message so cleverly that the line’s imperfections no longer mattered. What followed was the noisy-channel coding theorem, built on the premise that clear communication can still be achieved even in the presence of noise.
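For the mathematically curious, the theorem’s claim can be stated in one line; the familiar Shannon–Hartley formula below it gives the capacity of a noisy, bandwidth-limited channel.

```latex
% Informal statement of the noisy-channel coding theorem: any rate R
% below the channel capacity C is achievable with arbitrarily small
% error probability, no matter the noise.
R < C, \qquad C = \max_{p(x)} I(X; Y)
% Shannon–Hartley: capacity of a channel with bandwidth B and
% signal-to-noise ratio S/N.
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```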

Healthcare payers might read that story with a deep pang of recognition. I know from years working on the payer side that their universe is a noisy labyrinth of old adjudication engines, provider portals bolted on like spare parts, and data silos so tall they cast shadows across entire organizations—a phenomenon often summed up as technical debt, which we’ll dig into later. The problem is so universal that CVS recently committed $20 billion over the next decade to fix it, with the goal of reaching seamless interoperability.

Everyone talks about “interoperability”—that shining, static-free horizon where claims, lab results, and pharmacy refills flow unimpeded by data silos and complex regulations.

The goal of interoperability is to remove barriers to communication among the healthcare system’s many moving parts and, ultimately, to improve the health of the entire American population.

What if Shannon’s theorem can help us get there? What if the path to improved population health isn’t through perfect interoperability, but through AI that can extract meaningful signal from the industry’s enormous, messy, imperfect data ecosystem? Let’s dive into what this could look like.

The Dream of Interoperability and The Tyranny of Tech Debt: A Tale as Old as Time

Interoperability has become the healthcare equivalent of the flying car—perpetually “10 years away.” We fund pilots, attend conferences, and publish roadmaps. Yet ask any payer CTO how many of their 40‑year‑old core systems will be retired by decade’s end and the answer is, likely, not a one. The result is what software engineers call technical debt: yesterday’s rushed shortcuts that compound, like unpaid interest, into today’s paralysis.

Even the best MacGyver-ing can’t get all these systems to talk to each other. In practice, forcing dated infrastructure to speak a common tongue by bolting on translation tools often slows progress more than it accelerates it, because every translation layer adds yet another layer of debt—and it’s nobody’s fault.

Beyond tech debt, myriad obstacles stand in the way, spanning from data silos and budget constraints all the way to government regulations.

Harnessing AI with Clarity of Purpose

Which brings us to artificial intelligence (AI). Most payers already have the raw materials: mountains of claims, oceans of utilization data. They do not have to wait for a single, grand unification of systems to refine that raw ore into insight. They merely have to ask: Where, today, does a single percentage point of efficiency matter most?

A timely example is prior authorization—a much-maligned process that consumes clinician hours as well as patient goodwill. Last month, the office of the HHS Assistant Secretary for Technology Policy updated its regulations on using certified EHR technology to improve care delivery in a number of ways, including the digitization of prior authorization. This is indeed a move in the right direction; a single percentage point of efficiency, if you will. Prior authorization is necessary—albeit controversial—because it controls healthcare costs, keeping premiums down for all parties. Still, there is a lot of room for improvement, and AI may be the near-perfect tool for the job.

I say near-perfect because of ongoing state legislation focused on keeping human physicians, not algorithms, as the decision-makers in healthcare.

I’m confident we all agree that keeping humans at the helm of healthcare is ideal, and these regulations are vital to the future of population health. With that in mind, how might AI help physicians make prior authorization decisions more quickly and accurately?

One way would be to feed historic approval patterns into a machine learning model that analyzes vast amounts of data to generate swift, accurate suggestions for provider decisions. Assuming those historical patterns were fair and unbiased, this automation would slash turnaround times by streamlining the approval process and flagging clear-cut requests in real time, significantly enhancing operational efficiency and freeing clinicians to provide actual care instead of shouldering administrative tasks.
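To make that concrete, here is a minimal sketch of such decision support, using scikit-learn on entirely synthetic data. The features, labels, and fast-track threshold are illustrative assumptions, not a production design; the model only routes requests toward a human reviewer and never issues a decision on its own.

```python
# Hypothetical sketch: prior-authorization decision support.
# All data is synthetic; feature names and the 0.9 threshold are
# illustrative assumptions, not a production design.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Synthetic historical prior-auth requests.
X = np.column_stack([
    rng.lognormal(8, 1, n),   # requested procedure cost ($)
    rng.uniform(0, 1, n),     # member risk score
    rng.integers(0, 2, n),    # meets clinical guideline (0/1)
    rng.poisson(0.3, n),      # provider's recent denial count
])
# Synthetic label: historical approve (1) / deny (0) decisions.
logits = 2.5 * X[:, 2] - 0.00005 * X[:, 0] - 0.8 * X[:, 3]
y = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Route, don't decide: high-confidence approvals are suggested for
# fast-track clinician sign-off; everything else goes to full review.
# Humans stay at the helm either way.
proba = model.predict_proba(X_test)[:, 1]
fast_track = proba > 0.9
print(f"Suggested for fast-track sign-off: {fast_track.mean():.0%}")
```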

AI could also take on fraud detection: instead of static rules (“deny any claim over $5,000 from zip code X”), an AI can spot subtle anomalies humans miss—like the dermatologist in Miami who suddenly begins billing like a cardiothoracic surgeon. In fact, CMS just announced a challenge on this very topic. Sounds like magic, doesn’t it?
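It isn’t, quite. Here is a hedged sketch of that kind of anomaly spotting, using scikit-learn’s IsolationForest on synthetic per-provider billing profiles; the features and contamination rate are illustrative assumptions, and every flag goes to a human investigator, never straight to a denial.

```python
# Hypothetical sketch: flagging anomalous provider billing patterns.
# Synthetic data; features and contamination rate are illustrative
# assumptions, not tuned values.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
n_providers = 1_000

# Per-provider billing profile.
profiles = np.column_stack([
    rng.normal(180, 40, n_providers),   # mean claim amount ($)
    rng.normal(3.0, 0.8, n_providers),  # claims per member per year
    rng.beta(2, 20, n_providers),       # share of high-complexity codes
])

# Inject one "dermatologist billing like a cardiothoracic surgeon":
profiles[0] = [2200.0, 9.5, 0.85]

# IsolationForest scores each provider by how easily it can be
# isolated from the rest; outliers are easy to isolate.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(profiles)  # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"Providers flagged for human review: {flagged.tolist()[:10]}")
```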

The magic, though, is not in the algorithm; it’s in the clarity of purpose.

Why adopt AI? To increase operational efficiency for human decision-makers. How to deploy it? In pockets where the payoff is measurable. Master that narrow domain, build confidence, then—only then—widen the lens.

The Future: An LLM that Puts Population Health First

If we zoom forward a decade, we glimpse an architecture that reconciles today’s mess with tomorrow’s aspirations: a large language model at each payer that optimizes population health among its subscribers.

Imagine a health plan care manager asking, in plain English: “Show me how many people in my population have uncontrolled diabetes who would benefit from a digital diabetes management solution.”

With privacy as its first priority, the HIPAA-compliant LLM sifts through the noisy channels of de-identified structured claims, unstructured physician notes, social determinants of health feeds, even wearable data, and answers in seconds.

Once the LLM reports the count (preserving member privacy by leaving PHI within the relevant data stores, never exposing identity unnecessarily), the care manager can use that information to determine how best to scale a digital diabetes management solution and improve the holistic health of that population. In essence, AI does the heavy lifting, freeing the care manager to make strategic decisions for their member population.
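One way to picture that privacy boundary: the LLM never touches row-level PHI. Instead, it calls a narrow, auditable query tool that runs inside the payer’s walls and returns only aggregate counts. A minimal sketch, assuming de-identified records, a HEDIS-style HbA1c cutoff of 9.0, and a small-cell suppression floor of 11; all three are illustrative choices.

```python
# Hypothetical sketch of the "tool" a population-health LLM might call:
# it sees only de-identified records and returns aggregate counts,
# never row-level data. Thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeidentifiedMember:
    member_key: str      # opaque surrogate key, not an identifier
    last_hba1c: float    # most recent lab value
    has_diabetes_dx: bool
    enrolled_in_dm_program: bool

SUPPRESSION_FLOOR = 11   # suppress small cells to resist re-identification

def count_uncontrolled_diabetes(records: list[DeidentifiedMember]) -> str:
    """Cohort count only: PHI never leaves the data store."""
    cohort = [
        m for m in records
        if m.has_diabetes_dx
        and m.last_hba1c >= 9.0           # "uncontrolled" per HEDIS-style cutoff
        and not m.enrolled_in_dm_program  # would benefit from the solution
    ]
    n = len(cohort)
    if n < SUPPRESSION_FLOOR:
        return "Fewer than 11 members (count suppressed)."
    return f"{n} members meet the criteria."

# The LLM's answer to the care manager is built from this string alone.
demo = [DeidentifiedMember(f"m{i}", 9.4, True, False) for i in range(25)]
print(count_uncontrolled_diabetes(demo))  # "25 members meet the criteria."
```

Because the tool can only return suppressed aggregates, even a misbehaving model upstream has nothing identifying to leak.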

Call it the Population Health LLM—all‑encompassing and secure within each payer’s four walls. Data never leaves the moat; encryption guards every transit. Yet the model speaks HL7, ICD‑10, and plain‑language queries with equal fluency.

Will this single leap solve interoperability and move the needle on population health? In my opinion, yes. Because once data lives in a model that understands context, the underlying systems no longer need to. In the spirit of Shannon’s noisy-channel coding theorem, they can remain archaic, so long as their data flows into the ecosystem and then into the LLM.

If this sounds like a lot of data collection, you’re right: it is.

Whether you’re a payer, employer, or provider reading this, we’re all patients in the end. And with that in mind, I can hear the collective groan: “You want health plans to gather and collate more of my personal data? No thank you.”

I get it. And my response is simple: as technology advances, our de-identified health data will continue to be collected. Now is the time to make sure it’s collected with the aim of making healthcare easier, better, and more innovative for all of us, and that it meets the high standards of privacy, security, safety, and clinical fidelity we expect in healthcare data use.

How Payers Can Move Toward This Vision

We’ve finally reached the part you came here for—a list of steps payers can take toward embracing AI to promote healthier populations. My hope is that you will take these ideas (which are my own and don’t necessarily reflect the opinion of my employer) and apply them to your organization’s tech optimization journey, knowing that progress can lead to perfection, as long as we embrace the bumps (and modularity) along the way.

Or if you don’t buy that, how about similar wisdom from Dory the blue tang fish: “Just keep swimming.”

For payers, that means:

1. Acknowledging the tyranny of tech debt: Name it, map it, but don’t be hostage to it.

2. Maximizing population health impact in the near term: Focus on those single percentage points of efficiency that deliver immediate wins for high-risk member populations.

3. Architecting for swap-ability: Whether you build in-house or outsource, insist on interfaces that allow tomorrow’s breakthrough to slide effortlessly into today’s slot, but (especially if you buy) keep your member data private and secure. A brief sketch of what such an interface might look like follows this list.

4. Cultivating population-focused data discipline: Every AI project is also a population health data project. Clean, consistent member population datasets become the foundation for increasingly sophisticated health outcome predictions and interventions.

5. Letting the LLM emerge, not erupt: One day you'll realize your organization has assembled a comprehensive population health AI system, built incrementally from those well-governed datasets you've been carefully maintaining and connecting.
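On swap-ability (item 3 above), here is a brief Python sketch of the idea, with hypothetical class and method names: the workflow depends on a narrow interface, so a vendor component can be replaced by an in-house one without rewiring the calling code.

```python
# Hypothetical sketch of "architecting for swap-ability". Names and
# methods are illustrative assumptions, not a vendor's actual API.
from typing import Protocol

class RiskScorer(Protocol):
    """Anything that can score a member's risk from de-identified features."""
    def score(self, features: dict[str, float]) -> float: ...

class VendorRiskScorer:
    def score(self, features: dict[str, float]) -> float:
        # Stand-in for a vendor call kept behind the payer's firewall.
        return 0.5

class InHouseRiskScorer:
    def score(self, features: dict[str, float]) -> float:
        # Tomorrow's breakthrough slides into today's slot.
        return min(1.0, 0.1 * features.get("chronic_condition_count", 0.0))

def prioritize_outreach(members: list[dict[str, float]], scorer: RiskScorer):
    """Calling code never changes when the scorer is swapped."""
    return sorted(members, key=scorer.score, reverse=True)

members = [{"chronic_condition_count": 4.0}, {"chronic_condition_count": 1.0}]
print(prioritize_outreach(members, InHouseRiskScorer()))
```

The point is not the Protocol mechanics; it’s that the payer defines the slot, so tomorrow’s breakthrough has to fit the slot rather than the other way around.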

Claude Shannon conquered static not by rebuilding the telephone network, but by reframing the problem. He made peace with imperfection and, in doing so, rendered it irrelevant. Healthcare payers stand at a similar junction.

Tech debt might be insurmountable and interoperability impossible—for now. But AI, wielded with purpose and patience, with humans in the driver’s seat, allows us to route around the noise, to extract signal from chaos, and, step by deliberate step, build a system that finally lives up to its promise.

The journey need not be heroic. It need only be persistent, modular, and—above all—willing to let a little blue fish lead the way.