Bias and the silver bullet: Gender partiality seen within AI algorithms for AAA identification

Artificial intelligence (AI) has become a central focus within various spheres, from commerce to medicine—including applications in patient care. Sharon Kiang, MD, talks to Vascular Specialist about the specter of bias and how a “hybrid” approach between AI and human specialty may be the best solution going forward.

Sharon Kiang, MD, calls it the “silver bullet” perception of AI in the clinic. In the Loma Linda University School of Medicine associate professor’s reckoning, it is no mere nebulous metaphor but the underpinning of her latest research into disease prediction models, and the gendered biases her team found within their foundational algorithms.

“Originally, we weren’t looking at bias. We were just looking to see if the aortic model, using machine learning and AI algorithms, could predict and understand the progression of disease and, if possible, therefore predict outcomes,” Kiang tells Vascular Specialist. Operating on the assertion that human error and a “human-bound limitation of resources” confound patient care today, Kiang’s team believes that AI could alleviate a portion of manual “yearly factors,” which can take up significant time.

Kiang recently presented some of her latest findings at the 2023 Women’s Vascular Summit (April 28–29) in Buffalo, New York, where she expanded on AI’s increasingly frequent deployment, and the greater ease with which its limitations can now be seen. “We started reading about how bias is unintentionally put into algorithms, and so went to look at our own data to see if there are biases there,” she says. Her team selected gender as its bias variable, in part because disparities in post-intervention clinical outcomes among female aneurysm patients have already been reported. “We wanted to see if our machine was able to fix that and equalize its ability to learn,” Kiang explains.

Kiang told the Women’s Vascular Summit how she and colleagues investigated gendered bias in convolutional neural networks, or deep learning, for abdominal aortic aneurysm (AAA) identification. Their findings revealed that the machine was unable to overcome this bias and could not identify aneurysms in females as it could in males. “This may be that female aortas are smaller,” Kiang notes, thereby limiting the machine’s accuracy. Offering further explanation, she stated that their data may also have been “flawed,” creating bias in the algorithm, or that it may not accurately represent “the question being asked of the machine.”
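For readers curious what such a bias check can look like in practice, the sketch below shows one common auditing approach: compute a trained model’s sensitivity (true-positive rate) separately for male and female patients and compare the subgroups. The function, variable names, and toy data are hypothetical illustrations of the general technique, not code or results from Kiang’s study.

```python
# Illustrative sketch only: audit a classifier for subgroup bias by
# computing its sensitivity separately per patient group. All names
# and data here are hypothetical; they are not from Kiang's study.
import numpy as np

def sensitivity_by_group(y_true, y_pred, group):
    """Return the true-positive rate for each subgroup label."""
    results = {}
    for g in np.unique(group):
        positives = (y_true == 1) & (group == g)
        if positives.sum() == 0:
            continue  # no aneurysm cases in this subgroup
        results[str(g)] = float((y_pred[positives] == 1).mean())
    return results

# Toy data: 1 = aneurysm present, 0 = absent.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1])
sex    = np.array(["M", "M", "M", "M", "M", "F", "F", "F"])

print(sensitivity_by_group(y_true, y_pred, sex))
# {'F': 0.5, 'M': 0.75} — a gap like this is the kind of signal
# that would prompt a closer look at the data and the model.
```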

Although she hypothesized that a perfect algorithm could theoretically be designed—blinded to racial, gender, financial, and geographical disparities, in essence a “blinded unicorn algorithm”—Kiang states that questions arise about whether this would make it a “useful” algorithm at all. Where the “balance” lies, she continues, is in a realistic AI architecture that can support these variables in a “representative” and “fair” way.

But time is of the essence. “AI moves fast,” Kiang asserts, observing that her growing body of research is developing closely alongside that of organizations with predominantly financial motives, which are keen to “keep pushing.” Apart from a brief diversion to comment on the novel and rapid development of ChatGPT—“Dude, we haven’t even discussed the ethical implications of [ChatGPT], right?”—Kiang’s viewpoint remains focused on the positive change AI can effect in the care of humans globally. “It sounds Ivory Tower-ish, but my mind is centered on the benefit to populations—not just at a specialty level, or even at a hospital level, but at a national, global level,” Kiang says.

AI does not come without limitations, Kiang makes clear; however, apprehension has most commonly been “driven by fear.” “On a superficial level people suggest that they will be out of a job, and I believe that’s unfounded,” she says, highlighting that the only legitimate fear might be for support staff whose jobs could be performed more efficiently by AI. “I’m not here to say we should not advance technology so that people can be less efficient in the care that they provide patients. But neither am I saying we should build Skynet and drive the human race out of need,” she adds.

A “hybrid” approach between AI and human specialty may be the best solution, Kiang proposes, with a “human understanding of relationships” filling the gaps where AI is most lacking. But a “critical mass” of people who believe in the positive function of AI is needed, she continues. “If you don’t have that buy-in, you will have a lopsided relationship that isn’t truly a hybrid model. In this case, you will have executive bean-counters who don’t know anything about boots-to-the-ground patient care, leaning on AI predictions. I can see that as a problem moving forward.”

Kiang describes her experience of the complex contemporary reception to AI, detailing how audiences are becoming more “receptive” to her ongoing AI research compared with previous years.

“We are hitting while the iron is hot right now, and it’s very hot,” she says. Kiang believes people are now more “open-minded, or are being forced to be.” Emphasizing the importance of “clean” messaging when presenting research on this technology, Kiang says she aims to continue demonstrating the realistic clinical applications of AI that have the potential to better patient care in the future.
