S027 - Generation of Multi-modal Brain Tumor MRIs with Disentangled Latent Diffusion Model

Yoonho Na, Kyuri Kim, Sung-Joon Ye, Hwiyoung Kim, Jimin Lee


Deep-learning based image generation methods have been widely used to overcome data deficiency. The same is true in the medical field, where data shortage is a frequent problem. In this study, we propose a multi-modal brain tumor Magnetic Resonance Imaging (MRI) generation framework, called the Disentangled Latent Diffusion Model (DLDM), to tackle data deficiency in medical imaging. We train an autoencoder that disentangles the features of multi-modal MR images into modality-sharing and modality-specific representations. By utilizing the feature disentanglement learned by the autoencoder, we train a diffusion model that can generate modality-sharing and modality-specific latent vectors. We evaluate our approach with clean-FID and improved precision & recall, and compare the results with a GAN-based model, StyleGAN2.
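
The abstract's core idea is an autoencoder whose latent space is split into a modality-sharing part and a per-modality part, with a diffusion model later trained over those latents instead of pixels. The sketch below illustrates that split only; the module names, latent sizes, network widths, and the averaging used to obtain the shared latent are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions throughout): a disentangled autoencoder for four
# MRI modalities (e.g. T1, T1ce, T2, FLAIR). One shared encoder yields the
# modality-sharing latent; each modality has its own encoder for the
# modality-specific latent and its own decoder for reconstruction.
import torch
import torch.nn as nn

class Enc(nn.Module):
    def __init__(self, z_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, z_dim),
        )

    def forward(self, x):
        return self.net(x)

class Dec(nn.Module):
    def __init__(self, z_dim, size=64):
        super().__init__()
        self.size = size
        self.fc = nn.Linear(z_dim, 64 * (size // 4) * (size // 4))
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, self.size // 4, self.size // 4)
        return self.net(h)

class DisentangledAE(nn.Module):
    """Shared encoder + per-modality encoders/decoders (illustrative)."""
    def __init__(self, modalities=("t1", "t1ce", "t2", "flair"),
                 z_share=64, z_spec=16, size=64):
        super().__init__()
        self.shared_enc = Enc(z_share)
        self.spec_enc = nn.ModuleDict({m: Enc(z_spec) for m in modalities})
        self.dec = nn.ModuleDict(
            {m: Dec(z_share + z_spec, size) for m in modalities})

    def forward(self, x):  # x: dict modality -> (B, 1, H, W)
        # Modality-sharing latent: here simply the mean of the shared
        # encodings across modalities (an assumption for illustration).
        z_share = torch.stack([self.shared_enc(v) for v in x.values()]).mean(0)
        recon = {}
        for m, v in x.items():
            z_spec = self.spec_enc[m](v)  # modality-specific latent
            recon[m] = self.dec[m](torch.cat([z_share, z_spec], dim=1))
        return recon

# Usage: reconstruct all modalities from the disentangled latents. A latent
# diffusion model would then be trained to generate (z_share, z_spec) pairs.
x = {m: torch.randn(2, 1, 64, 64) for m in ("t1", "t1ce", "t2", "flair")}
out = DisentangledAE()(x)
print({m: tuple(t.shape) for m, t in out.items()})
```

For the evaluation side, the clean-fid package's `fid.compute_fid(dir_real, dir_fake)` is one way to compute the clean-FID score mentioned in the abstract; the specific evaluation setup used by the authors is not described here.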


Short paper

Schedule: Wednesday, July 12: Virtual poster session - 8:00–9:00
Poster location: Virtual only