Inclusive Health Care

by Deirdre Kelly

Big data and new machine-learning systems are transforming health care by cutting wait times and streamlining the gathering of information – so that cases requiring life-saving interventions can be identified and prioritized. Artificial intelligence (AI) is also accelerating drug research and the development of new therapies, while enhancing diagnostics and the practice of preventive medicine. But there is a downside.

Research carried out by Laleh Seyyed-Kalantari at the Lassonde School of Engineering (and published in the journal Nature Medicine) has found that there are biases embedded in AI algorithms, leading to wrong diagnoses or incorrect treatments for certain socio-demographic groups, particularly marginalized or minority populations.

The bias is not a technical issue; it is the result of social and cultural factors. Machine-learning algorithms are only as unbiased as the humans programming them, says Seyyed-Kalantari, a professor in the Department of Electrical Engineering and Computer Science who studies inequities in AI.

“Such biases are especially troubling in the context of under-diagnosis, whereby the AI algorithm would inaccurately label an individual with a disease as healthy, potentially delaying access to care,” she says. 

Seyyed-Kalantari’s latest research into algorithmic bias – published in The Lancet Digital Health – examined algorithmic under-diagnosis in chest X-ray pathology classification across three large chest X-ray datasets, as well as one multi-source dataset. The research found that the AI classifiers consistently under-diagnosed underserved patient populations – such as female patients, Black patients, and patients of low socioeconomic status – and that under-diagnosis rates were higher still for intersectional subpopulations, patients belonging to more than one of these groups.
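
To make the finding concrete, here is a minimal sketch – illustrative only, not the study’s actual code – of how under-diagnosis rates might be compared across patient subgroups, treating under-diagnosis as a false negative: a truly ill patient whom the model labels healthy. The column names and data below are hypothetical.

```python
# Sketch: compare under-diagnosis (false negative) rates across subgroups.
# All column names and values are hypothetical, for illustration only.
import pandas as pd

def underdiagnosis_rate(df: pd.DataFrame) -> float:
    """Fraction of truly ill patients that the model labelled healthy."""
    ill = df[df["has_disease"] == 1]
    if len(ill) == 0:
        return float("nan")
    return (ill["predicted_disease"] == 0).mean()

# Hypothetical model outputs: one row per patient.
data = pd.DataFrame({
    "has_disease":       [1, 1, 1, 1, 1, 1, 1, 1],
    "predicted_disease": [1, 1, 0, 1, 0, 0, 1, 0],
    "sex":               ["M", "M", "F", "M", "F", "F", "M", "F"],
})

# Compare the rate across subgroups; a large gap signals potential bias.
for group, rows in data.groupby("sex"):
    print(group, underdiagnosis_rate(rows))
```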

“Deployment of AI systems using medical imaging for disease diagnosis with such biases risks exacerbation of existing care biases and can potentially lead to unequal access to medical treatment, thereby raising ethical concerns for the use of these models in the clinic,” she concludes. 

How to mitigate the bias? 

AI systems need to be better designed to reflect diversity in socioeconomic and health-care settings to avoid health disparities, Seyyed-Kalantari says. Until that happens, she advises health-care workers to proceed with caution, suggesting they pre-process their data before feeding it to an AI system and filter the system’s outputs in accordance with fairness definitions – ideally the same definitions introduced into the AI training process itself. “Addressing this concern is essential,” she adds, “so that the benefits of health-care-related AI are not realized at the expense of sustaining or increasing discrimination against marginalized groups.” ■
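
As one illustration of the kind of output filtering described above – a sketch under assumed data, not Seyyed-Kalantari’s own method – a post-processing step can pick a separate decision threshold for each subgroup so that under-diagnosis (false negative) rates stay at or below a target level for everyone. All names and scores below are hypothetical.

```python
# Sketch: per-subgroup thresholds that cap the under-diagnosis rate.
# Hypothetical data; an illustration of post-processing, not the study's code.
import numpy as np

def threshold_for_target_fnr(scores, labels, target_fnr=0.1):
    """Highest threshold whose false negative rate among truly ill
    patients stays at or below target_fnr."""
    ill_scores = np.sort(scores[labels == 1])
    # Allow at most target_fnr of ill patients to fall below the threshold.
    k = int(np.floor(target_fnr * len(ill_scores)))
    return ill_scores[k]  # scores >= threshold are flagged as diseased

rng = np.random.default_rng(0)
for group, shift in [("group_A", 0.2), ("group_B", 0.0)]:
    # Hypothetical ground-truth labels and model risk scores for each group.
    labels = rng.integers(0, 2, size=200)
    scores = np.clip(rng.normal(0.5 + shift * labels, 0.15, size=200), 0, 1)
    t = threshold_for_target_fnr(scores, labels)
    fnr = np.mean(scores[labels == 1] < t)
    print(group, "threshold:", round(float(t), 3), "FNR:", round(float(fnr), 3))
```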
