A SAM-Guided and Match-Based Semi-Supervised Segmentation Framework for Medical Imaging

Authors: Weiguo Lu, Jax Luo, Xiaoxue Qian, Hua-Chieh Shao, Guoping Xu, You Zhang

Affiliations: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center; Harvard Medical School

Abstract:

Purpose:
Semi-supervised segmentation leverages sparse annotations to learn rich representations from combined labeled and unlabeled data. This study uses a foundation model, the Segment Anything Model (SAM), to assist the unsupervised learning component of semi-supervised, Match-based segmentation frameworks. Trained on an extremely large dataset, SAM generalizes better than traditional models across imaging domains and tasks, allowing it to help Match-based frameworks improve the quality of the intermediate pseudo-labels used for unsupervised learning.
Methods:
We propose SAMatch, a SAM-guided, Match-based framework for semi-supervised medical image segmentation. SAMatch involves two main steps. First, we use a pre-trained Match-based model to extract high-confidence predictions from unlabeled samples for prompt generation; the Match-based framework is a teacher-student model that applies different data-level augmentations to its two branches. Second, the generated prompts and the corresponding unlabeled images are fed into a fine-tuned SAM model to produce high-quality masks as pseudo-labels. These refined pseudo-labels are then fed back to train the Match-based framework by enforcing an unsupervised consistency loss between the segmentation outputs of the student and teacher models. SAMatch is trained end-to-end, facilitating interactions between the SAM- and Match-based models. We evaluated SAMatch on multiple datasets, including ACDC (cardiac MRI segmentation), BUSI (breast ultrasound lesion segmentation), and an in-house liver MRI segmentation dataset (MRLiver).
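The prompt-generation step described above can be sketched as follows: high-confidence foreground pixels from the Match-based model's probability map are sampled as point prompts for SAM. The function name, confidence threshold, and point count here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def prompts_from_confidence(prob_map, conf_thresh=0.95, n_points=3, seed=0):
    """Sample high-confidence foreground pixels as SAM point prompts.

    prob_map: (H, W) foreground probability from the Match-based model.
    Returns an (n, 2) array of (row, col) prompt coordinates, or an
    empty array if no pixel exceeds the confidence threshold.
    NOTE: threshold and point count are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    rows, cols = np.where(prob_map >= conf_thresh)
    if rows.size == 0:
        return np.empty((0, 2), dtype=int)
    # Sample without replacement among all sufficiently confident pixels.
    idx = rng.choice(rows.size, size=min(n_points, rows.size), replace=False)
    return np.stack([rows[idx], cols[idx]], axis=1)

# Toy example: a confident 3x3 blob inside an 8x8 probability map.
prob = np.zeros((8, 8))
prob[2:5, 2:5] = 0.99
points = prompts_from_confidence(prob)
```

The sampled points would then be passed to SAM's prompt encoder (e.g., as positive point prompts) to obtain the refined pseudo-label mask; when the map contains no confident pixels, the empty result signals that the pseudo-label for that sample should be skipped rather than refined.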
Results:
SAMatch demonstrates robust performance across all datasets. On ACDC, with only three labeled cases for semi-supervised learning, we achieved an average ± s.d. Dice score of 89.36 ± 0.06% on 20 test cases. For BUSI, using just 30 labeled samples, the corresponding Dice was 77.76 ± 0.06% for 170 test samples. On MRLiver, three labeled training cases yielded an average Dice of 80.04 ± 0.11% on 12 test scans.
Conclusion:
SAMatch demonstrates strong potential for semi-supervised segmentation by addressing challenges in automatic prompt generation and pseudo-label refinement, making it particularly effective for annotation-efficient learning in label-scarce scenarios.
