P222 - A comparison of self-supervised pretraining approaches for predicting disease risk from chest radiograph images

Yanru Chen, Michael T Lu, Vineet K Raghu


Deep learning is the state-of-the-art for medical imaging tasks, but requires large, labeled datasets. For risk prediction, large datasets are rare since they require both imaging and follow-up (e.g., diagnosis codes). However, the release of publicly available imaging data with diagnostic labels presents an opportunity for self- and semi-supervised approaches to improve label efficiency for risk prediction. Though several studies have compared self-supervised approaches in natural image classification, object detection, and medical image interpretation, there is limited data on which approaches learn robust representations for risk prediction. We present a comparison of semi- and self-supervised learning to predict mortality risk using chest x-ray images. We find that a semi-supervised autoencoder outperforms contrastive and transfer learning in internal and external validation.
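For illustration, below is a minimal PyTorch sketch of one way a semi-supervised autoencoder of this kind could be set up: a shared encoder feeds both a reconstruction decoder (trained on all images) and a risk head (trained only on the labeled subset). The architecture, 224x224 grayscale input, loss weighting (alpha), and all names are assumptions for illustration, not the authors' implementation.

    # Hypothetical sketch of a semi-supervised autoencoder for risk prediction.
    # Architecture, loss weighting, and input size are illustrative assumptions,
    # not the method described in the paper.
    import torch
    import torch.nn as nn

    class SemiSupervisedAutoencoder(nn.Module):
        def __init__(self, latent_dim: int = 128):
            super().__init__()
            # Encoder: grayscale chest x-ray -> latent vector
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 224 -> 112
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 112 -> 56
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 56 -> 28
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(128, latent_dim),
            )
            # Decoder: latent vector -> reconstructed image (unsupervised branch)
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 128 * 28 * 28), nn.ReLU(),
                nn.Unflatten(1, (128, 28, 28)),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )
            # Risk head: latent vector -> mortality risk logit (supervised branch)
            self.risk_head = nn.Linear(latent_dim, 1)

        def forward(self, x):
            z = self.encoder(x)
            return self.decoder(z), self.risk_head(z)

    def semi_supervised_loss(model, images, labels, label_mask, alpha=1.0):
        """Reconstruction loss on all images; risk loss only where labels exist."""
        recon, logits = model(images)
        recon_loss = nn.functional.mse_loss(recon, images)
        if label_mask.any():
            risk_loss = nn.functional.binary_cross_entropy_with_logits(
                logits[label_mask].squeeze(-1), labels[label_mask].float())
        else:
            risk_loss = torch.zeros((), device=images.device)
        return recon_loss + alpha * risk_loss

    if __name__ == "__main__":
        model = SemiSupervisedAutoencoder()
        images = torch.rand(4, 1, 224, 224)                     # toy batch of x-rays
        labels = torch.tensor([1, 0, 0, 0])                     # mortality labels
        label_mask = torch.tensor([True, True, False, False])   # only 2 labeled
        loss = semi_supervised_loss(model, images, labels, label_mask)
        loss.backward()
        print(loss.item())

The design point the sketch tries to capture is label efficiency: every image contributes to the reconstruction term, while only the (typically small) labeled subset drives the risk term.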


Poster presentation

Schedule: Wednesday, July 12: Posters — 10:15–12:00 & 15:00–16:00
Poster location: W55