The algorithm will see you now: Why AI is making healthcare less fair, not more

BY NCHEBE-JAH ILOANUSI

Last month, I watched a resident dismiss a concerning symptom in a Black patient partly because the hospital’s risk prediction system scored him as “low priority”. The algorithm had spoken. What that resident didn’t know—what most of us don’t know—is that these systems often work worse for the patients who need them most.

A new analysis of 45 studies reveals something troubling: artificial intelligence in healthcare isn’t the great equalizer we hoped for. Instead, it’s a bias amplifier. When researchers tested a widely used algorithm that helps hospitals decide which patients need extra care, they found it systematically underestimated risk for Black patients. To get the same level of attention as white patients, Black patients had to be significantly sicker.

This isn’t an isolated glitch. Skin cancer detection algorithms perform worse on darker skin. Sepsis prediction tools are less accurate for women. Kidney function tests built into electronic health records consistently overestimate function in Black patients, potentially delaying referrals for transplants. The pattern is clear and consistent.

The root problem is simple: these systems learn from our data, and our data reflects our biases. Medical datasets have historically excluded minorities from research. Electronic health records often lack basic demographic information. When race or ethnicity data exists, it’s frequently incomplete or inconsistent. We’re training machines on the same skewed information that created disparities in the first place.

But here’s what makes this crisis different from past medical inequities: speed and scale. A biased human doctor can harm dozens of patients over a career. A biased algorithm deployed across a health system can harm thousands of people in a single day. We’re not just perpetuating bias; we’re industrializing it.

The solutions being proposed fall into two camps, and frankly, one isn’t working. The technical camp focuses on adjusting algorithms, reweighting data, and post-processing outputs. These approaches treat bias like a software bug to be patched. The results have been mixed at best.

The second approach recognizes that this isn’t just a computer science problem; it’s a social justice problem. It demands something radical for healthcare AI: involving the communities most affected in designing these systems from the ground up. This means patients, community advocates, and social workers sitting alongside data scientists and physicians when we build these tools.

Some dismiss this as impractical idealism. I call it basic quality assurance. Would we deploy a drug without testing it in diverse populations? Would we use a diagnostic tool that worked poorly in women? We shouldn’t deploy AI systems that perform worse for the patients who already receive substandard care.

The medical establishment needs to act now, before these biased systems become more entrenched. First, no healthcare AI should be deployed without bias testing across demographic groups. Period. Second, we need diverse teams building these tools—not just diverse data, but diverse perspectives in the room where decisions get made. Third, we must include affected communities in governance structures for healthcare AI.

Some institutions are already moving. A few health systems now audit their algorithms for bias. Some medical schools are teaching students to question AI recommendations. But these efforts remain scattered and voluntary.

The stakes are too high for voluntary action. Medical boards certify doctors. The FDA approves medical devices. We need similar oversight for healthcare AI. The alternative is a two-tiered system where algorithms give better care to those who already have advantages.

Twenty years ago, evidence-based medicine transformed healthcare by demanding rigorous proof that treatments actually work. Today, we need equity-based AI—systems that we can prove work fairly across all populations they serve. The technology exists. The methods exist. What we need now is the will to use them.

The algorithm will see you now. The question is: will it see you clearly?

Nchebe-Jah Raymond Iloanusi is an assistant professor in the Biology Department at the College of Staten Island, CUNY.

Views expressed by contributors are strictly personal and not of TheCable.
