Author: David H. Abramson, Christopher Barker, Jasmine H. Francis, Meng Wei Ho, Yen-Po Lee, Haibo Lin, Hang Qi, Andy Shim, Charles B. Simone, Weihong Sun, Xiaoxuan Xu, Siyu Yang, Francis Yu, Anna Zhai
Affiliation: College of Machine Intelligence, Nankai University, New York Proton Center, Department of Biomedical Engineering, Johns Hopkins University, Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center
Purpose: Proton therapy is an effective modality for treating ocular tumors such as uveal melanoma. We developed a novel camera-based eye-tracking system to provide real-time, noninvasive eye positioning and motion monitoring. The system tracks pupil coordinates, while a newly developed 3D model dynamically simulates eye movement derived from pupil motion, quantifying the corresponding variations in target coverage and the potential dosimetric impact.
Methods: An infrared camera and an infrared light source directed at the pupil were employed. A deep learning workflow (Mediapipe-FaceMesh for eye localization, TensorFlow for pupil detection) computes real-time eye coordinates. Based on the planning CT and target contours, a Python routine reconstructs a 3D eyeball model and the clinical target volume (CTV), mapping the 2D pupil movements onto a patient-specific ellipsoidal geometry. Beyond the 2D pupil displacements, the system computes the target localization and compares it with the prescription dose cloud. Daily cone-beam CT (CBCT) and kV imaging confirm the patient's alignment and provide a baseline for the tracking system. A tolerance-based alert issues warnings when deviations exceed predefined thresholds.
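The following is a minimal Python sketch, not the clinical code, of how a 2D pupil displacement might be mapped onto a patient-specific ellipsoidal eye model and checked against a planning-margin threshold; the semi-axes, camera calibration factor, CTV offset, and function names are illustrative assumptions.

```python
# Sketch only: map a 2D pupil displacement (camera pixels) onto an ellipsoidal eye
# model and estimate the resulting CTV displacement. All numeric values are assumed.
import numpy as np

SEMI_AXES = np.array([12.0, 11.5, 11.8])   # ellipsoid semi-axes (mm), e.g. fitted to the CT eyeball contour
MM_PER_PIXEL = 0.05                        # camera calibration factor (assumed)
CTV_OFFSET = np.array([4.0, -2.0, 6.5])    # CTV centroid relative to eye center (mm, assumed)
TOLERANCE_MM = 3.0                         # planning margin used as the alert threshold

def pupil_shift_to_rotation(dx_px, dy_px):
    """Convert a 2D pupil shift (pixels) into small-angle eye rotations (rad)."""
    dx_mm, dy_mm = dx_px * MM_PER_PIXEL, dy_px * MM_PER_PIXEL
    yaw = dx_mm / SEMI_AXES[0]     # rotation about the superior-inferior axis
    pitch = -dy_mm / SEMI_AXES[1]  # rotation about the left-right axis
    return yaw, pitch

def rotated_ctv_position(yaw, pitch):
    """Rotate the CTV centroid about the eye center by the estimated eye rotation."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return Rx @ Rz @ CTV_OFFSET

def check_tolerance(dx_px, dy_px):
    """Return the estimated CTV displacement (mm) and whether it exceeds the threshold."""
    yaw, pitch = pupil_shift_to_rotation(dx_px, dy_px)
    displacement = np.linalg.norm(rotated_ctv_position(yaw, pitch) - CTV_OFFSET)
    return displacement, displacement > TOLERANCE_MM

# Example: a 20-pixel horizontal pupil shift
shift_mm, alert = check_tolerance(20.0, 0.0)
print(f"Estimated CTV displacement: {shift_mm:.2f} mm, alert: {alert}")
```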
Results: The enhanced workflow achieves a 3D reconstruction accuracy of ≤0.2 mm, validated via daily CBCT and kV imaging. Physical information in the CBCT serves as a calibration reference to align the eye and CTV in a shared 3D coordinate system, ensuring that sub-millimetric changes in ocular position are consistently detected. The system reliably measured displacements of up to 1.4 mm (versus a 3 mm planning margin) for the first test patient, with a 0.5-second response time. Although minor eye motions were observed, the combined 2D pupil tracking and 3D target-position monitoring confirmed robust target coverage during each beam delivery.
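One way to realize the shared 3D coordinate system described above is a rigid (Kabsch/Procrustes) fit between corresponding landmarks seen in the camera frame and in the CBCT/kV baseline; the sketch below illustrates this under that assumption, with hypothetical landmark values that are not taken from the study.

```python
# Sketch only: estimate the rigid transform placing camera-derived eye positions
# into the treatment (CT) coordinate system from landmark correspondences.
import numpy as np

def rigid_transform(camera_pts, ct_pts):
    """Kabsch fit: rotation R and translation t such that ct ~ R @ camera + t."""
    c_cam, c_ct = camera_pts.mean(axis=0), ct_pts.mean(axis=0)
    H = (camera_pts - c_cam).T @ (ct_pts - c_ct)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_ct - R @ c_cam
    return R, t

# Hypothetical corresponding landmarks (mm) from the baseline setup.
camera_landmarks = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                             [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
ct_landmarks = np.array([[2.0, 1.0, -3.0], [12.0, 1.0, -3.0],
                         [2.0, 11.0, -3.0], [2.0, 1.0, 7.0]])

R, t = rigid_transform(camera_landmarks, ct_landmarks)
eye_center_camera = np.array([1.2, -0.4, 0.8])   # tracked eye position in the camera frame
eye_center_ct = R @ eye_center_camera + t        # the same point in CT coordinates
print(np.round(eye_center_ct, 2))
```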
Conclusion: This in-house camera-based system tracks ocular motion in real time and offers 3D prediction of target coverage for proton therapy, maintaining high-accuracy alignment throughout treatment. By leveraging deep learning and an ellipsoidal 3D model, it enables noninvasive, sub-millimetric monitoring of tumor position.