S021 - Exploring the Role of Explainability for Uncovering Bias in Deep Learning-based Medical Image Analysis
Emma Stanley, Matthias Wilms, Pauline Mouches, Nils Forkert
Fairness and bias are critical considerations for the effective and ethical use of deep learning models for medical image analysis. Despite this, there has been minimal research on how explainable artificial intelligence (XAI) methods can be leveraged to better understand the underlying causes of bias in medical image data. To study this, we trained a convolutional neural network on brain magnetic resonance imaging (MRI) data of 4547 adolescents to predict biological sex. Performance disparities between White and Black racial subgroups were analyzed, and average saliency maps were generated for each subgroup based on sex and race. The model showed significantly higher performance in correctly classifying White males compared to Black males, and slightly higher performance for Black females compared to White females. Saliency maps indicated subgroup-specific differences in brain regions associated with pubertal development, an established confounder in this task that is itself associated with race. These findings suggest that models demonstrating performance disparities can also produce differing XAI outcomes across subgroups, offering insights into potential sources of bias in medical image data.
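The abstract does not specify which saliency method was used, so the sketch below is only an illustration of the general idea of subgroup-averaged saliency maps, using vanilla gradient saliency in PyTorch. The `model` interface, the `samples` iterable, and the subgroup keys are hypothetical assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: per-voxel gradient saliency averaged within each
# (sex, race) subgroup. Assumes a trained PyTorch CNN `model` mapping
# a volume of shape (1, C, D, H, W) to class logits, and `samples`
# yielding (volume, sex_label, subgroup_key) triples -- all hypothetical.
import torch
from collections import defaultdict

def gradient_saliency(model, x, target):
    """Vanilla gradient saliency: |d score / d input| for one class."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()          # scalar logit for the target class
    return x.grad.abs().squeeze(0)          # (C, D, H, W) saliency volume

def subgroup_average_saliency(model, samples):
    """Average the saliency volumes over all subjects in each subgroup."""
    model.eval()                            # disable dropout/BN updates
    sums, counts = defaultdict(float), defaultdict(int)
    for volume, sex, group in samples:      # group e.g. "Black female"
        sal = gradient_saliency(model, volume, target=int(sex))
        sums[group] = sums[group] + sal     # 0.0 + tensor promotes to tensor
        counts[group] += 1
    return {g: sums[g] / counts[g] for g in sums}
```

Averaging within each subgroup, as described in the abstract, makes systematic regional differences (for example, in areas tied to pubertal development) visible across subgroups rather than being lost in per-subject noise.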
Short paper
Schedule: Monday, July 10: Posters — 11:00–12:00 & 15:00–16:00
Poster location: M37