AI in Healthcare Equity

What is AI in Healthcare Equity?

Artificial intelligence (AI) in healthcare equity refers to the ways in which AI systems and algorithms that support medical decisions affect fairness in healthcare. As hospitals and doctors increasingly use AI to diagnose diseases, suggest treatments, and allocate resources, new problems emerge. Sometimes these AI tools work better for some patient groups than for others, creating unintended bias and disparities in care. Understanding these issues matters because AI in healthcare is becoming increasingly widespread.

The Basic Idea

Maria schedules her annual mammogram at the same clinic she's visited for eight years. When she arrives for her examination, the technician mentions that they now use an AI system to help radiologists detect abnormalities earlier. "It's really advanced," he explains. "Catches things we might miss."

Three days later, Maria receives a callback requesting additional imaging. The AI flagged something suspicious. During her follow-up appointment, the radiologist shows her the scan and explains that the AI identified an area of concern that led to this precautionary step.

What Maria doesn't know is that the AI learned to spot problems by looking at thousands of mammograms from other women. But most of those women had lighter skin and different breast tissue patterns. For women like Maria, whose characteristics differ from most of the data used to train the algorithm, the system is more likely to flag problems that aren't really there. That means more frightening phone calls and extra appointments she doesn't actually need.

Meanwhile, across town, Robert sits in his doctor’s office as she looks at computer-generated suggestions for his diabetes treatment. The program recommends specific medications based on Robert's test results and medical history, and the doctor has learned to trust these suggestions because they seem thorough and scientific.

However, the algorithm was trained on data from patients who took their medications regularly, never missed appointments, and had stable living situations. Robert sometimes skips his medicine when money is tight and has missed appointments because he can't take time off work. Because the doctor relies heavily on an algorithm that doesn't account for these factors, she doesn't realize the suggestions may not suit Robert's situation.

In both cases, computer systems designed to help people get better healthcare accidentally make things worse. Maria gets scared and has to come back for tests she doesn't need. Robert gets treatment advice that isn’t tailored to his real-life situation.

These stories illustrate how medical tools designed to improve patient care can unintentionally make healthcare less equal. The technology itself isn't good or bad. What matters is the data it learned from, where it is applied, and how doctors and patients respond to it. Together, these factors determine whether it benefits everyone equally or deepens existing problems.
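For readers curious what checking for this kind of gap might look like in practice, here is a minimal, hypothetical sketch (the data, column names, and group labels are invented for illustration) of how an analyst could compare a screening model's false-positive rate across patient groups, the kind of disparity behind Maria's unnecessary callback.

```python
# Hypothetical illustration: auditing a screening model's false-positive
# rate across patient groups. The data and column names are made up.
import pandas as pd

# Each row: the model's flag (1 = "suspicious"), the confirmed outcome
# (1 = disease actually present), and a demographic group label.
results = pd.DataFrame({
    "model_flag": [1, 0, 1, 1, 0, 1, 0, 1],
    "confirmed":  [1, 0, 0, 1, 0, 0, 0, 0],
    "group":      ["A", "A", "B", "A", "B", "B", "A", "B"],
})

# False-positive rate per group: how often healthy patients are flagged.
# A large gap between groups means one group gets more unnecessary
# callbacks, as in Maria's story.
healthy = results[results["confirmed"] == 0]
fpr_by_group = healthy.groupby("group")["model_flag"].mean()
print(fpr_by_group)
```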

People’s trust in medical technology varies, often because of past experiences with healthcare.1 Some communities have good reason to be cautious, while others welcome new tools. Doctors also react differently to computer suggestions, and those reactions can shape whether the technology helps or harms. The challenge is bigger than fixing computer code. To make patient care more equitable, we need to look closely at each potential impact of AI.

“Bias is a human problem. When we talk about ‘bias in AI,’ we must remember that computers learn from us.”

— Michael Choma, American physician and AI engineer2

About the Author

Joy VerPlanck

Dr. VerPlanck brings over two decades of experience helping teams learn and lead in high-stakes environments. With a background in instructional design and behavioral science, she develops practical solutions at the intersection of people and technology. Joy holds a Doctorate in Educational Technology and a Master of Science in Organizational Leadership, and often writes about cognitive load and creativity as levers to enhance performance. 
