Artificial Intelligence (AI), one of the most transformative technological innovations in recent years, is rapidly reshaping how health systems and providers design and deliver care. As virtual care becomes more widespread, AI is likely to play a pivotal role in enhancing existing solutions and opening new frontiers for improvement.
Already, AI solutions have demonstrated significant value across a range of healthcare domains, including disease diagnosis, drug discovery, robotic-assisted surgery and the automation of administrative tasks, all contributing to cost reduction and improved patient outcomes. For instance, emerging data have highlighted AI’s success in detecting early-stage cancer, predicting treatment responses, and enabling population health management through predictive analytics for patients at risk of chronic diseases.
However, despite its promising impact, the integration of AI into healthcare also raises important concerns about bias and inequity, especially when it comes to race, ethnicity, age, sex and socioeconomic status. Without deliberate action, these technologies risk reinforcing or exacerbating existing disparities in care for diverse populations.
AI bias occurs when an AI system makes decisions that are systematically unfair to certain groups of people. In healthcare, this can lead to inequitable allocation of resources and/or poor health outcomes for those groups. The first step in mitigating bias is understanding how it is introduced into AI algorithms or the underlying data, and how it can arise at each stage of the AI lifecycle. This section examines key types of bias and their implications in healthcare.
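To make this concrete, the sketch below shows one common way such bias is surfaced in practice: comparing a model’s false negative rate (missed positive cases) across patient groups. The data, group names, and code are hypothetical, illustrative assumptions rather than a description of any real system.

```python
# Hypothetical sketch: detect systematic unfairness by comparing a model's
# false negative rate (missed positive cases) across demographic groups.
# All data and group labels below are illustrative, not from a real system.

def false_negative_rate(y_true, y_pred):
    """Share of actual positives the model failed to flag."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(1 for _, p in positives if p == 0) / len(positives)

# Hypothetical outcomes (y_true) and model predictions (y_pred) per group.
results = {
    "group_a": {"y_true": [1, 1, 0, 1, 0, 1], "y_pred": [1, 1, 0, 1, 0, 1]},
    "group_b": {"y_true": [1, 1, 0, 1, 0, 1], "y_pred": [0, 1, 0, 0, 0, 1]},
}

rates = {g: false_negative_rate(d["y_true"], d["y_pred"]) for g, d in results.items()}
gap = max(rates.values()) - min(rates.values())

print(rates)                                # {'group_a': 0.0, 'group_b': 0.5}
print(f"FNR gap across groups: {gap:.2f}")  # a large gap signals systematic unfairness
```

Equal error rates are only one possible fairness criterion; which metric matters most depends on the clinical decision the model supports.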
Mitigating AI bias is essential given the pace of AI adoption in healthcare: over 67% of healthcare practices report they are fully using or experimenting with AI. However, a recent survey of US adults found that 66% have low trust in their healthcare system’s responsible use of AI, with 58% expressing concern about potential harm.
Other concerns contributing to distrust in AI include threats to patient choice, potential increases in healthcare costs, and data security risks. Healthcare providers have also expressed a need for transparency to build their trust in AI and machine learning (ML) devices. To reach their full potential, these tools require health systems to implement oversight mechanisms, promote stakeholder engagement, and communicate clearly about how data and AI are used.
To bend the curve on chronic diseases, we need to build AI systems that work for everyone, especially the communities most affected by these conditions. Chronic conditions disproportionately impact underrepresented and economically disadvantaged groups, yet these same communities often continue to face the greatest barriers to accessing and benefiting from new technologies.
For AI to truly help bend the curve, it must not only be available, but also usable, accessible, and relevant to those who need it the most. Here are some key steps we can take to make AI more inclusive:
Consider Cultural Context: Cultural diversity plays a significant role in healthcare, shaping patients’ beliefs and behaviors. Respecting cultural context and diversity means building AI tools grounded in cultural sensitivity, competence and responsiveness. For instance, Omada’s AI-powered nutrition education chatbot is trained on over 3 million foods from across 150 countries, sourced from OpenFoodFacts.org. This enables the education and ideas it supplies to reflect local eating habits and culinary traditions; for example, it can distinguish a North African dish made with couscous from an Asian dish made with rice.
Reading Level Considerations: With 54% of American adults reading below a sixth-grade level, it is critical that AI tools use patient education materials that are easy to understand. To support individuals with varying literacy levels, these materials should follow the reading standards recommended for their intended audience: fifth grade by the Joint Commission, sixth grade by the American Medical Association, and no higher than eighth grade by the National Institutes of Health (a readability sketch follows this list).
Include Diverse Voices: From design to user testing and implementation, representation matters. Involving people of different races, ethnicities, income levels, and educational backgrounds will help identify issues early on and build trust with the communities the tools are intended to serve.
Additionally, voice technology diversity plays a critical role in inclusivity. With 42.2% of the U.S. population identifying as non-white, it is vital that the voices used in healthcare interfaces reflect this diversity. Health systems should prioritize racial and cultural representation in voice user interfaces to foster trust and inclusion, making AI tools more accessible and effective for all.
Ongoing Monitoring and Adaptation: The journey does not end with the deployment of AI; continuous auditing and adaptation are essential. Building systems to monitor bias and disparities, while soliciting and acting on ongoing feedback, is critical for maintaining fairness, trustworthiness, and accountability in AI use, as sketched below.
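As a rough illustration of that monitoring step, the sketch below recomputes a subgroup performance gap on each new batch of outcomes and flags widening disparities for human review. The metric, threshold, and batch data are assumptions chosen purely for illustration.

```python
# Hypothetical post-deployment monitor: recompute a subgroup metric gap on
# each new batch of outcomes and flag widening disparities for review.
# The threshold, metric, and data schema are illustrative assumptions.

GAP_THRESHOLD = 0.10  # assumed maximum acceptable gap between groups

def subgroup_gap(batch):
    """batch maps group name -> a fairness-relevant metric (e.g., accuracy)."""
    return max(batch.values()) - min(batch.values())

def monitor(batches):
    for i, batch in enumerate(batches):
        gap = subgroup_gap(batch)
        status = "ALERT: review for bias" if gap > GAP_THRESHOLD else "ok"
        print(f"batch {i}: gap={gap:.2f} -> {status}")

# Hypothetical weekly accuracy by group, drifting apart over time.
monitor([
    {"group_a": 0.91, "group_b": 0.89},  # gap 0.02 -> ok
    {"group_a": 0.92, "group_b": 0.80},  # gap 0.12 -> alert
])
```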
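And as a companion to the reading-level guidance above, here is a small sketch that estimates a text’s Flesch-Kincaid grade level and flags material above a sixth-grade target. The syllable counter is a deliberately naive vowel-group heuristic and the sample sentence is hypothetical; production tools would use more robust readability libraries.

```python
# Hedged sketch: estimate the Flesch-Kincaid grade level of patient education
# text and flag material above a target grade. The syllable counter is a
# naive vowel-group heuristic; real tools use dictionaries or NLP libraries.
import re

def count_syllables(word):
    # Approximate syllables as runs of vowels (crude but dependency-free).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = "Take one pill each morning with food. Call us if you feel dizzy."
grade = fk_grade(sample)
print(f"Estimated grade level: {grade:.1f}")  # ~2.4 for this sample
if grade > 6:  # sixth-grade target per the American Medical Association
    print("Consider simplifying this material.")
```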
Health equity requires continuous action, and mitigating AI bias is no exception. When designed and deployed responsibly, AI can help move us toward a more equitable healthcare system. As AI-supported healthcare evolves, health systems have the opportunity to address patients’ key concerns, including transparent communication, privacy and trustworthiness. Beyond implementation, proper protocols and policies must be in place for ongoing monitoring of equity considerations. This begins with a commitment to building trust and engaging with diverse communities to ensure AI-supported healthcare benefits everyone.
At Omada, we are committed to providing human-led care supported by AI. Our cross-functional team, spanning legal, compliance, quality improvement, health equity, and applied AI, strives to identify and address AI bias. By developing guardrails for design and implementation and ensuring diverse voices in user testing, we aim to deliver AI solutions that benefit all members without exposing them to unnecessary risks.
To learn more about AI at Omada, visit our Product Innovation blog.