Author: Yunxiang Li, Weiguo Lu, Xiaoxue Qian, Hua-Chieh Shao, You Zhang
Affiliation: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center
Purpose:
Curating high-quality, labeled data for medical image segmentation can be challenging and costly, given the variety of image domains with differing modalities/protocols. Cross-domain unsupervised learning leverages data from a labeled source domain to train a segmentation model on an unlabeled target domain, eliminating the labeling requirement for the target domain. We developed an uncertainty-guided cross-domain adaptation framework (UGCDA), which introduces uncertainty awareness into cross-domain adaptation to align model predictions between the source and target domains and improve the accuracy of unsupervised image segmentation.
Methods:
UGCDA used a dual-branch network with histogram matching (HM) to mitigate prediction uncertainties and enhance target-domain segmentation accuracy. Specifically, during UGCDA training, a source-domain image was histogram-matched to a randomly sampled, unpaired target-domain image, yielding a new HM image. The source-domain image was input into a Source-Net branch for segmentation, while the HM image was fed into a parallel Target-Net branch, producing two segmentation predictions. For each prediction, an uncertainty map was also estimated via Shannon entropy. The predictions were then weighted by the uncertainty maps, and the Kullback-Leibler (KL) divergence between them was used as an uncertainty-weighted alignment loss to enforce consistency between the two predictions. Using exponential moving average updates and uncertainty alignment, the framework iteratively optimized target-domain predictions for anatomically consistent and plausible segmentations.
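The uncertainty-weighted alignment step can be illustrated with a minimal sketch. This is a hypothetical NumPy implementation, not the authors' code: the exact weighting scheme and normalization are assumptions. It estimates per-pixel Shannon entropy for each branch's softmax prediction, converts entropy into a confidence weight, and computes an entropy-weighted KL divergence between the two predictions.

```python
import numpy as np

def softmax(logits, axis=0):
    # Numerically stable softmax over the class axis
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shannon_entropy(probs, axis=0, eps=1e-8):
    # Per-pixel Shannon entropy of a probability map (C, H, W) -> (H, W)
    return -(probs * np.log(probs + eps)).sum(axis=axis)

def uncertainty_weighted_kl(src_logits, tgt_logits, eps=1e-8):
    """Uncertainty-weighted KL alignment loss (sketch).

    src_logits, tgt_logits: (C, H, W) logits from the Source-Net and
    Target-Net branches for the same anatomy. The weighting choice
    (down-weighting high-entropy pixels) is an illustrative assumption.
    """
    p_s = softmax(src_logits)
    p_t = softmax(tgt_logits)
    n_classes = src_logits.shape[0]
    # Confidence weight in [0, 1]: 1 where both branches are certain,
    # 0 where entropy reaches its maximum log(C)
    w = 1.0 - 0.5 * (shannon_entropy(p_s) + shannon_entropy(p_t)) / np.log(n_classes)
    # Per-pixel KL divergence KL(p_s || p_t), summed over classes
    kl = (p_s * (np.log(p_s + eps) - np.log(p_t + eps))).sum(axis=0)
    return float((w * kl).mean())
```

In this sketch, identical predictions yield zero loss, while confidently disagreeing pixels dominate the gradient, which matches the stated goal of enforcing consistency where the network is certain.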
Results: Evaluated on an abdominal CT and MRI dataset for cross-domain segmentation adaptation (CT-to-MRI and MRI-to-CT), UGCDA achieved the highest segmentation accuracy among multiple state-of-the-art methods, with a mean (±s.d.) Dice similarity coefficient (DSC) of 92.57±1.07% and a 95th percentile Hausdorff distance (HD95) of 2.96±1.27 mm for CT-to-MRI adaptation, and 87.80±2.64% DSC and 1.70±0.26 mm HD95 for MRI-to-CT adaptation in multi-organ segmentation.
Conclusion:
UGCDA effectively reduces the domain gap in unsupervised medical image segmentation, achieving high accuracy through uncertainty-guided cross-domain alignment and promoting label- and data-efficient learning.