Author: Petr Bruza, Yao Chen, David J. Gladstone, Lesley A. Jarvis, Brian W. Pogue, Kimberley S. Samkoe, Yucheng Tang, Shiru Wang, Rongxiao Zhang
Affiliation: NVIDIA Corp, Dartmouth College, Thayer School of Engineering, Dartmouth College, Dartmouth Cancer Center, University of Missouri, University of Wisconsin-Madison
Purpose: Cherenkov imaging provides real-time visualization of megavoltage radiation beam delivery during radiotherapy. Patient-specific bio-morphological features captured in these images, such as vasculature, can be used for precise positioning and motion management. However, prior approaches lacked efficient biological feature-based tracking because conventional image processing was too slow and inaccurate for real-time feature segmentation. This study developed a deep learning framework for this purpose, applied it to a clinical breast cancer Cherenkov dataset, and achieved robust, video-frame-rate processing.
Methods: A deep learning model was trained for binary segmentation of bio-morphological features in Cherenkov images. A transfer learning strategy was implemented to address the limited availability of annotated Cherenkov data. A retina fundus dataset comprising 20,529 vascular-rich images with ground-truth vessel annotations was used to pre-train a ResNet-based segmentation framework. This model was then fine-tuned on a clinical breast cancer Cherenkov dataset annotated for vasculature, comprising 1,483 images from 212 treatment fractions of 19 patients.
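A minimal sketch of the two-stage transfer-learning setup described above, assuming a PyTorch implementation with a ResNet-18 encoder and a lightweight decoder head; the exact architecture, hyperparameters, and the dataset loaders (retina_loader, cherenkov_loader) are illustrative placeholders, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ResNetSeg(nn.Module):
    """ResNet-18 encoder with a simple upsampling decoder for binary vessel segmentation."""

    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep the convolutional stages only (drops avgpool/fc): output is (B, 512, H/32, W/32).
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),  # one-channel logit map for the binary mask
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train(model, loader, epochs, lr, device="cuda"):
    """Generic training loop shared by the pre-training and fine-tuning stages."""
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, masks in loader:  # masks: binary ground-truth vessel annotations
            images, masks = images.to(device), masks.to(device)
            loss = loss_fn(model(images), masks)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model


model = ResNetSeg()
# Stage 1: pre-train on the vessel-annotated retina fundus images (hypothetical loader).
# model = train(model, retina_loader, epochs=50, lr=1e-3)
# Stage 2: fine-tune on the annotated Cherenkov images at a lower learning rate.
# model = train(model, cherenkov_loader, epochs=20, lr=1e-4)
```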
Results: The framework was evaluated on Cherenkov images from 19 breast cancer patients, automatically segmenting clinical bio-morphological features such as subcutaneous veins, scars, and pigmented skin. The model was substantially faster (0.7 milliseconds vs. 2 minutes) and more robust (accuracy: 0.87 ± 0.029) than the conventional manual segmentation method. It also exhibited superior sensitivity in detecting respiratory motion occurring within a 0.5-second window in radiotherapy patients.
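One way the reported accuracy could be computed is per-pixel agreement between the predicted binary mask and the manual annotation; the thresholding value and NumPy-based evaluation below are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np


def pixel_accuracy(pred_logits: np.ndarray, manual_mask: np.ndarray, thr: float = 0.5) -> float:
    """Fraction of pixels where the thresholded prediction matches the manual mask."""
    pred_mask = (pred_logits > thr).astype(np.uint8)
    return float((pred_mask == manual_mask.astype(np.uint8)).mean())


# Example: mean ± standard deviation over a set of (prediction, annotation) pairs.
# scores = [pixel_accuracy(p, m) for p, m in zip(predictions, annotations)]
# print(np.mean(scores), np.std(scores))
```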
Conclusion: This study introduces a novel deep learning framework to automatically segment Cherenkov-imaged bio-morphological features in radiotherapy patients. Rapid and robust segmentation is essential for downstream clinical applications, including patient setup verification and dose monitoring. Once integrated with fast image registration, this work will enable real-time monitoring and motion tracking based on the segmented features.