Inter-Machine Harmonization in Echocardiographic Videos for Predicting Left Ventricular Ejection Fraction

Authors: Akihiro Haga, Ren Iwasaki, Kenya Kusunose, Makoto Miyake, Kenji Moriuchi, Yasuharu Takeda, Hidekazu Tanaka, Hirotsugu Yamada

Affiliations: Department of Cardiovascular Medicine, Nephrology, and Neurology, Graduate School of Medicine, University of the Ryukyus; Graduate School of Biomedical Sciences, Tokushima University; Division of Cardiovascular Medicine, Department of Internal Medicine, Kobe University Graduate School of Medicine; Department of Cardiology, Tenri Hospital; Department of Cardiovascular Medicine, Osaka University Graduate School of Medicine; Division of Heart Failure, Department of Heart Failure and Transplant, National Cerebral and Cardiovascular Center

Abstract:

Purpose: Device dependency is a significant challenge in medical AI, potentially limiting generalization performance. This study aimed to develop a robust deep learning model for predicting left ventricular ejection fraction (LVEF) while addressing device dependency using echocardiographic videos.

Methods: Echocardiographic videos from five facilities were categorized by vendor (GE: 1,911; PHILIPS: 804; CANON: 427) and analyzed across five chamber views. Extraneous textual and waveform information in the videos was removed using a preprocessing convolutional neural network (CNN) trained on manually annotated images containing only the essential cardiac features. Each cardiac cycle was divided into 20 frames per beat. For LVEF prediction, a 3D CNN was trained on the GE dataset. Data augmentation techniques, including gamma correction, scaling, median filtering, unsharp masking, translation, rotation, and image generation with CycleGAN, were applied. Performance was evaluated on test data using the mean absolute error (MAE). Final predictions were calculated as the mean or regression of values across views.
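The fixed-length cycle division described above can be sketched as uniform temporal resampling of one beat to 20 frames. This is a minimal illustration, not the authors' implementation; the function name and the nearest-frame sampling strategy are assumptions.

```python
import numpy as np

def resample_cycle(frames: np.ndarray, n_out: int = 20) -> np.ndarray:
    """Uniformly resample one cardiac cycle to a fixed number of frames.

    frames: array of shape (T, H, W) covering exactly one beat.
    Returns an array of shape (n_out, H, W) via nearest-frame sampling.
    """
    t = frames.shape[0]
    # Evenly spaced positions from the first to the last frame of the cycle,
    # rounded to the nearest available frame index.
    idx = np.round(np.linspace(0, t - 1, n_out)).astype(int)
    return frames[idx]
```

With this convention, every beat yields the same 20-frame clip regardless of heart rate or frame rate, which keeps the 3D CNN input shape fixed.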

Results: The preprocessing network reduced device dependency, as demonstrated by histogram-based pixel-intensity statistics (mean ± SD): GE (0.235 ± 0.0489 to 0.263 ± 0.0419), PHILIPS (0.235 ± 0.0643 to 0.273 ± 0.0564), and CANON (0.157 ± 0.0419 to 0.210 ± 0.0420). LVEF prediction achieved high performance, with MAE values of 4.33 for GE, 4.42 for PHILIPS, and 4.89 for CANON. No significant differences were found among vendors (Tukey's test), suggesting that inter-machine dependence can be minimized with a combination of data augmentation techniques.
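For reference, the MAE figures above are the mean absolute difference between predicted and ground-truth LVEF values (in percentage points). A minimal sketch, with illustrative names only:

```python
import numpy as np

def mean_absolute_error(y_true, y_pred) -> float:
    """MAE between ground-truth and predicted LVEF values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))
```

For example, predictions of [62, 50, 41] against ground truth [60, 55, 40] give an MAE of 8/3 ≈ 2.67 percentage points.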

Conclusion: The preprocessing network and data augmentation reduced device dependency and minimized inter-device differences in LVEF prediction. This approach demonstrated potential for standardizing echocardiographic data nationwide, facilitating unified AI training.
