Automated Framework for Predicting Tumour Growth in Vestibular Schwannomas Using Contrast-Enhanced T1-Weighted MRI

Authors: Mehdi Amini, Minerva Becker, Simina Chiriac, Alexandre Cusin, Dimitrios Daskalou, Ghasem Hajianfar, Sophie Neveu, Marcella Pucci, Yazdan Salimi, Pascal Senn, Habib Zaidi

Affiliations: Division of Radiology, Diagnostic Department, Geneva University Hospitals; Service of Otorhinolaryngology-Head and Neck Surgery, Department of Clinical Neurosciences, Geneva University Hospitals

Abstract:

Purpose: Personalized prediction of vestibular schwannoma (VS) tumour growth is crucial for guiding patient management decisions toward observation versus intervention. This study proposes an automated framework for predicting tumour growth from contrast-enhanced T1-weighted MRI, combining a deep learning-based VS segmentation tool with a radiomics and machine learning-based classifier.
Methods: A total of 193 VS patients (2014–2022) were screened, of whom 116 met the inclusion criteria after excluding cases with no follow-up scans, scan intervals <90 days, or tumours <15 mmΒ³. Four radiologists manually segmented the tumours to provide ground-truth masks. Tumour growth was defined as a >20% annual volume increase (50 positive cases, 66 negative). For the segmentation model, a self-configuring nnU-Net was trained for 2000 epochs with a decaying learning rate starting from 1e-2. Its performance was evaluated with 5-fold cross-validation and reported as the Dice similarity coefficient. Image preprocessing for radiomics analysis included isotropic resampling (0.75 mmΒ³), bias field correction, and Z-score normalization. PyRadiomics extracted 107 features (first-order, shape, and texture), which were fed into 15 models combining five feature selection (FS) and 10 machine learning (ML) methods. The dataset was split into three folds, with two used for training and one for testing. Performance was reported as area under the ROC curve (AUC), accuracy, sensitivity, and specificity.
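For illustration, the sketch below outlines the described preprocessing, radiomics extraction, and ANOVA/logistic-regression steps, assuming SimpleITK, PyRadiomics, and scikit-learn implementations; this is not the authors' code, and file names, the number of selected features, and other parameters are placeholders rather than the study's actual configuration.

```python
# Minimal sketch (assumptions noted) of the preprocessing and classification
# steps described above; paths and parameter values are illustrative only.
import SimpleITK as sitk
from radiomics import featureextractor
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler


def preprocess(image_path, spacing=(0.75, 0.75, 0.75)):
    """Isotropic resampling, N4 bias field correction, Z-score normalization."""
    img = sitk.ReadImage(image_path, sitk.sitkFloat32)

    # Resample to isotropic voxels.
    old_spacing, old_size = img.GetSpacing(), img.GetSize()
    new_size = [int(round(osz * osp / nsp))
                for osz, osp, nsp in zip(old_size, old_spacing, spacing)]
    img = sitk.Resample(img, new_size, sitk.Transform(), sitk.sitkLinear,
                        img.GetOrigin(), spacing, img.GetDirection())

    # N4 bias field correction on an Otsu-derived foreground mask.
    mask = sitk.OtsuThreshold(img, 0, 1, 200)
    img = sitk.N4BiasFieldCorrectionImageFilter().Execute(img, mask)

    # Z-score intensity normalization.
    arr = sitk.GetArrayFromImage(img)
    arr = (arr - arr.mean()) / (arr.std() + 1e-8)
    norm = sitk.GetImageFromArray(arr)
    norm.CopyInformation(img)
    return norm


# First-order, shape, and texture features via PyRadiomics.
extractor = featureextractor.RadiomicsFeatureExtractor()
# features = extractor.execute(preprocess("t1ce.nii.gz"), "tumour_mask.nii.gz")

# ANOVA feature selection + logistic regression (the best-performing FS/ML
# combination reported below); k=10 is an illustrative choice, not the study's.
clf = Pipeline([
    ("scale", StandardScaler()),
    ("fs", SelectKBest(f_classif, k=10)),
    ("lr", LogisticRegression(max_iter=1000)),
])
# from sklearn.model_selection import cross_validate
# cross_validate(clf, X, y, cv=3, scoring=["roc_auc", "accuracy"])  # 3-fold split
```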
Results: The segmentation model achieved an average Dice similarity coefficient of 84.6 Β± 5.51 over the five folds. The optimal radiomics model used ANOVA for FS and logistic regression as the ML classifier, achieving an AUC of 0.71, an accuracy of 0.74, a sensitivity of 0.65, and a specificity of 0.82.
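For reference, the reported classification metrics can be computed from a test fold's predictions as in the following sketch; the labels and probabilities shown are placeholder values, not study data.

```python
# Illustrative computation of AUC, accuracy, sensitivity, and specificity
# from hypothetical test-fold predictions (placeholder data).
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix

y_true = np.array([0, 1, 0, 1, 1, 0])            # hypothetical growth labels
y_prob = np.array([0.2, 0.7, 0.4, 0.6, 0.3, 0.1])  # predicted growth probabilities
y_pred = (y_prob >= 0.5).astype(int)

auc = roc_auc_score(y_true, y_prob)
acc = accuracy_score(y_true, y_pred)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                     # true-positive rate
specificity = tn / (tn + fp)                     # true-negative rate
print(f"AUC={auc:.2f} ACC={acc:.2f} SEN={sensitivity:.2f} SPE={specificity:.2f}")
```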
Conclusion: This study demonstrates the potential of deep learning for precise VS tumour segmentation on T1-weighted MRIs and highlights the capability of radiomics-based models to predict tumour growth, enabling improved personalized care for VS patients.
