Authors: Mingli Chen, Xuejun Gu, Mahdieh Kazemimoghadam, Weiguo Lu, Qingying Wang
Affiliations: Medical Artificial Intelligence and Automation (MAIA) Lab, Department of Radiation Oncology, UT Southwestern Medical Center; Department of Radiation Oncology, Stanford University School of Medicine
Purpose: This study introduces a novel template-guided deep learning framework for primary gross tumor volume (GTVp) segmentation that addresses the challenges posed by diverse tumor types, enabling a single universal model to segment accurately and adaptably across varied clinical scenarios.
Methods: The model, based on a 3D nnU-Net architecture, takes three input channels (the primary CT image, a template CT image, and the template mask), enabling a "Follow-the-Leader" learning approach in which the segmentation adapts to the selected template (the leader). The framework was validated on the RADCURE dataset of 3,346 head and neck (H&N) cancer patients, spanning 91 distinct categories derived from combinations of patient sex, tumor laterality, main site, subsite, and tumor stage. A representative template was selected from each category, and the model was trained on primary CT images paired with their corresponding templates; the three-channel input assembly is sketched below. Data allocation included up to 10 cases per category for training and validation, with additional cases reserved for testing.
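
As an illustration of the input design only (not the authors' code), the following minimal sketch shows how the three channels described above could be assembled and passed to a 3D network. The patch size, tensor names, and the small convolutional stand-in for the nnU-Net backbone are assumptions made for the example.

import torch
import torch.nn as nn

# Hypothetical patch size; the study's actual nnU-Net configuration is not stated here.
D, H, W = 64, 128, 128
primary_ct  = torch.randn(1, 1, D, H, W)                     # channel 1: primary CT image
template_ct = torch.randn(1, 1, D, H, W)                     # channel 2: template CT image
template_mk = torch.randint(0, 2, (1, 1, D, H, W)).float()   # channel 3: template GTVp mask

# The three channels are concatenated so the network can condition its
# prediction on the selected template (the "leader").
x = torch.cat([primary_ct, template_ct, template_mk], dim=1)  # shape (1, 3, D, H, W)

# Toy stand-in for a three-input-channel 3D nnU-Net backbone.
net = nn.Sequential(
    nn.Conv3d(3, 16, kernel_size=3, padding=1),
    nn.InstanceNorm3d(16),
    nn.LeakyReLU(inplace=True),
    nn.Conv3d(16, 1, kernel_size=1),                          # per-voxel GTVp logits
)
logits = net(x)                                               # shape (1, 1, D, H, W)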
Results: Compared with a conventional 3D nnU-Net model for CT-based GTVp segmentation, the proposed Follow-the-Leader approach significantly improved the Dice similarity coefficient (DSC), average surface distance (ASD), and 95th-percentile Hausdorff distance (HD95) across all H&N tumor sites. For larynx (269 cases), T3 DSC increased from 0.24 to 0.61, while ASD decreased from 9.00 mm to 2.42 mm and HD95 from 20.47 mm to 6.80 mm. For hypopharynx (14 cases), T2 DSC improved from 0.34 to 0.58, while ASD decreased from 9.31 mm to 2.60 mm and HD95 from 14.42 mm to 8.12 mm. In oropharynx (261 cases), the model outperformed the baseline for lower-stage tumors (T1 and T2) across all categories: for T1, DSC increased from 0.61 to 0.75, ASD decreased from 3.15 mm to 2.50 mm, and HD95 from 10.15 mm to 6.85 mm; for T2, DSC increased from 0.65 to 0.78, ASD decreased from 3.42 mm to 2.64 mm, and HD95 from 9.74 mm to 7.20 mm.
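
For reference, the sketch below shows one common way to compute the three reported metrics from binary masks. It is an illustration, not the study's evaluation code; in particular, HD95 conventions vary, and here the 95th percentile is taken over the pooled symmetric surface distances.

import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dsc(pred, gt):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def asd_hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    # Surface voxels: each mask minus its erosion.
    pred, gt = pred.astype(bool), gt.astype(bool)
    pred_surf = pred ^ binary_erosion(pred)
    gt_surf = gt ^ binary_erosion(gt)
    # Distance from each surface voxel to the other surface, in mm via voxel spacing.
    d_pred_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)[pred_surf]
    d_gt_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)[gt_surf]
    d = np.concatenate([d_pred_to_gt, d_gt_to_pred])
    return d.mean(), np.percentile(d, 95)   # ASD, HD95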
Conclusion: The novel template-guided deep learning framework effectively adapts to diverse tumor types by tailoring segmentation to selected templates, enhancing delineation accuracy, as demonstrated with a comprehensive H&N cancer dataset.