Author: Zachary Buchwald, Chih-Wei Chang, Zach Eidex, Richard L.J. Qiu, Mojtaba Safari, Shansong Wang, Xiaofeng Yang, David Yu
Affiliation: Department of Radiation Oncology and Winship Cancer Institute, Emory University
Purpose: MRI offers excellent soft-tissue contrast for diagnosis and treatment but suffers from long acquisition times, causing patient discomfort and motion artifacts. To accelerate MRI, supervised deep-learning (DL) methods have been developed; however, obtaining the paired under-sampled and fully-sampled datasets they require for training remains challenging. To address this, we propose a self-supervised DL approach that leverages adversarial diffusion to reconstruct high-resolution MR images from under-sampled data.
Methods: We used the fastMRI multi-coil brain T2-weighted dataset of 1,376 cases, allocating 80% for training and validation and 20% for testing. To assess robustness to domain shifts, we evaluated the model on two out-of-distribution datasets, multi-coil brain T1-weighted and contrast-enhanced T1-weighted (T1c) images, totaling 100 cases. Data were under-sampled at acceleration rates of R=2×, 4×, and 8×. The data were split into two non-overlapping sets, S1 and S2, with the model trained on S1 and supervised by the unseen S2 data. Our method was compared quantitatively and qualitatively against two state-of-the-art methods (ReconFormer and SS-MRI) using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Normalized Mean Squared Error (NMSE). Statistical analyses were performed at a significance level of 0.05. Results are reported as means with 95% confidence intervals.
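For reference, the sketch below shows one common way to compute the three reported metrics (NMSE, PSNR, SSIM) for a reconstructed magnitude image against its fully-sampled reference, together with a simple 1D Cartesian under-sampling mask at R=4×. The masking pattern, center fraction, and the scikit-image metric calls are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def cartesian_mask(shape, acceleration=4, center_fraction=0.08, seed=0):
    """Illustrative 1D Cartesian under-sampling mask (hypothetical pattern).

    Keeps a fully sampled block of central phase-encoding lines and randomly
    retains the rest so that roughly 1/acceleration of all lines are acquired.
    """
    ny = shape[-1]                          # phase-encoding lines along the last axis
    rng = np.random.default_rng(seed)
    num_center = max(1, round(ny * center_fraction))
    prob = (ny / acceleration - num_center) / (ny - num_center)
    lines = rng.random(ny) < prob           # randomly kept outer lines
    start = (ny - num_center) // 2
    lines[start:start + num_center] = True  # always keep the low-frequency center
    return np.broadcast_to(lines, shape)


def nmse(reference, reconstruction):
    """Normalized mean squared error: ||ref - rec||^2 / ||ref||^2."""
    return (np.linalg.norm(reference - reconstruction) ** 2
            / np.linalg.norm(reference) ** 2)


def evaluate(reference, reconstruction):
    """Return (NMSE, PSNR in dB, SSIM) for two magnitude images of equal shape."""
    data_range = float(reference.max() - reference.min())
    return (
        nmse(reference, reconstruction),
        peak_signal_noise_ratio(reference, reconstruction, data_range=data_range),
        structural_similarity(reference, reconstruction, data_range=data_range),
    )


# Toy example: synthetic arrays stand in for a fully-sampled slice and a reconstruction.
ref = np.abs(np.random.default_rng(1).standard_normal((320, 320)))
rec = ref + 0.01 * np.random.default_rng(2).standard_normal((320, 320))
print(evaluate(ref, rec))                                 # (NMSE, PSNR, SSIM)
print(cartesian_mask((320, 320), acceleration=4).mean())  # ~0.25 of lines kept at R=4x
```

The helpers above return raw ratios for NMSE and SSIM and decibels for PSNR; the Results section reports SSIM as a percentage, and the NMSE values appear to follow a scaled convention (e.g., ×10⁻²).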
Results: Our method outperformed the comparison methods at all acceleration rates. For example, at R=4×, it achieved an NMSE of 1.26 (1.18–1.35), a PSNR of 35.44 dB (35.31–35.58), and an SSIM of 95.55% (95.38–95.69). At R=8×, it maintained high-quality reconstructions with an NMSE of 3.26 (3.08–3.51), a PSNR of 31.67 dB (31.55–31.79), and an SSIM of 91.67% (91.45–91.89). Robustness testing on the out-of-distribution datasets showed significant improvements (p < 0.001) in all metrics over zero-filled reconstruction and the comparison models, indicating better voxel-wise similarity and fewer spatial distortions.
Conclusion: The proposed self-supervised adversarial diffusion framework reconstructs high-resolution images from under-sampled data, potentially reducing scan times while preserving image quality and thereby enhancing MRI accessibility and utility.