Left ventricular dilatation (LVD) and left ventricular hypertrophy (LVH) are risk factors for heart failure, and their detection improves heart failure screening. This study aimed to investigate the ability of deep learning to detect LVD and LVH from the 12-lead electrocardiogram (ECG). Using paired ECG and echocardiographic data, we developed deep learning and machine learning models to detect LVD and LVH; we also examined conventional ECG criteria for the diagnosis of LVH. We calculated the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and accuracy of each model and compared the models' performance. We analyzed data from 18,954 patients (mean age (standard deviation): 64.2 (16.5) years; men: 56.7%). For the detection of LVD, the AUROC (95% confidence interval) was 0.810 (0.801-0.819) for the deep learning model, significantly higher than that of the logistic regression and random forest models (P < 0.001), whose AUROCs were 0.770 (0.761-0.779) and 0.757 (0.747-0.767), respectively. For the detection of LVH, the AUROC was 0.784 (0.777-0.791) for the deep learning model, significantly higher than those of the logistic regression and random forest models and of conventional ECG criteria (P < 0.001); the AUROCs for the logistic regression and random forest models were 0.758 (0.751-0.765) and 0.716 (0.708-0.724), respectively. These results suggest that deep learning is a useful method for detecting LVD and LVH from 12-lead ECGs.
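The baseline comparison described above (logistic regression and random forest scored by AUROC with bootstrap confidence intervals) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's pipeline; the feature set, sample sizes, and 200-resample bootstrap are assumptions for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for ECG-derived features and LVD/LVH labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

rng = np.random.default_rng(0)
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    auroc = roc_auc_score(y_te, scores)
    # Percentile bootstrap for a 95% CI on the AUROC.
    boot = []
    for _ in range(200):
        idx = rng.integers(0, len(y_te), len(y_te))
        if len(np.unique(y_te[idx])) < 2:
            continue  # AUROC undefined if a resample has one class only
        boot.append(roc_auc_score(y_te[idx], scores[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"{name}: AUROC {auroc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

Comparing the deep learning model against these baselines would additionally require a paired test on the two score vectors (the study reports P < 0.001), which is omitted here for brevity.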
Background: Aortic regurgitation (AR) is a common heart disease, with a relatively high prevalence of 4.9% in the Framingham Heart Study. Because the prevalence increases with advancing age, an upward shift in the age distribution may increase the burden of AR. To provide an effective screening method for AR, we developed a deep learning-based artificial intelligence algorithm for the diagnosis of significant AR using electrocardiography (ECG). Methods: Our dataset comprised 29,859 paired ECG-echocardiography records, including 412 AR cases, from January 2015 to December 2019. This dataset was divided into training, validation, and test datasets. We developed a multi-input neural network model, which comprised a two-dimensional convolutional neural network (2D-CNN) using raw ECG data and a fully connected deep neural network (FC-DNN) using ECG features, and compared its performance with the performances of a 2D-CNN model and other machine learning models. In addition, we used gradient-weighted class activation mapping (Grad-CAM) to identify which parts of the ECG waveforms had the most effect on algorithm decision making. Results: The area under the receiver operating characteristic curve of the multi-input model (0.802; 95% CI, 0.762-0.837) was significantly greater than that of the 2D-CNN model alone (0.734; 95% CI, 0.679-0.783; p < 0.001) and those of the other machine learning models. Grad-CAM demonstrated that the multi-input model tended to focus on the QRS complex in leads I and aVL when detecting AR. Conclusions: The multi-input deep learning model using 12-lead ECG data could detect significant AR with modest predictive value.
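The multi-input idea above (a convolutional branch over raw ECG waveforms fused with a fully connected branch over scalar ECG features) can be sketched as a single forward pass in plain numpy. This is a conceptual sketch only; the layer sizes, kernel width, and 12-lead-by-500-sample input shape are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def conv_branch(ecg, kernels):
    """Cross-lead convolution: ecg is (leads, samples), kernels is (n_k, leads, width)."""
    n_k, _, w = kernels.shape
    out = np.empty((n_k, ecg.shape[1] - w + 1))
    for k in range(n_k):
        for t in range(out.shape[1]):
            out[k, t] = np.sum(kernels[k] * ecg[:, t:t + w])
    # Global average pooling collapses the time axis, one value per kernel.
    return relu(out).mean(axis=1)

# Hypothetical shapes: 12-lead ECG, 500 samples; 8 scalar ECG features.
ecg = rng.standard_normal((12, 500))
features = rng.standard_normal(8)

kernels = rng.standard_normal((16, 12, 9)) * 0.1
W_feat = rng.standard_normal((16, 8)) * 0.1    # FC branch weights
W_out = rng.standard_normal(32) * 0.1          # head over the concatenated vector

cnn_vec = conv_branch(ecg, kernels)            # CNN-style branch on raw waveforms
fc_vec = relu(W_feat @ features)               # FC branch on ECG features
fused = np.concatenate([cnn_vec, fc_vec])      # multi-input fusion
prob = 1.0 / (1.0 + np.exp(-(W_out @ fused)))  # sigmoid AR probability
print(f"predicted AR probability: {prob:.3f}")
```

The key design point is the concatenation step: both branches reduce their input to a fixed-length vector, so the output head can weigh waveform morphology and derived features jointly.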
Deep learning models can be applied to electrocardiograms (ECGs) to detect left ventricular (LV) dysfunction. We hypothesized that applying a deep learning model may improve the diagnostic accuracy of cardiologists in predicting LV dysfunction from ECGs. We acquired 37,103 paired ECG-echocardiography records from patients who underwent echocardiography between January 2015 and December 2019. We trained a convolutional neural network to identify the records of patients with LV dysfunction (ejection fraction < 40%) using a dataset of 23,801 ECGs. When tested on an independent set of 7,196 ECGs, the area under the receiver operating characteristic curve was 0.945 (95% confidence interval: 0.936-0.954). When 7 cardiologists interpreted 50 randomly selected ECGs from the test dataset of 7,196 ECGs, their accuracy for predicting LV dysfunction was 78.0% ± 6.0%. By referring to the model's output, their accuracy improved to 88.0% ± 3.7%, indicating that model support significantly improved the cardiologists' diagnostic accuracy (P = 0.02). A sensitivity map demonstrated that the model focused on the QRS complex when detecting LV dysfunction on ECGs. We developed a deep learning model that can detect LV dysfunction on ECGs with high accuracy. Furthermore, we demonstrated that support from a deep learning model can help cardiologists to identify LV dysfunction on ECGs.
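The sensitivity map mentioned above asks a simple question: how much does each input sample move the model's output? For a toy differentiable model this can be approximated by finite differences, as in the sketch below. The logistic "model" and 500-sample ECG here are stand-ins for illustration, not the study's network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy differentiable "model": a logistic score over a flattened single-lead ECG.
w = rng.standard_normal(500) * 0.05

def model(ecg):
    return 1.0 / (1.0 + np.exp(-(w @ ecg)))

ecg = rng.standard_normal(500)

# Finite-difference sensitivity: perturb each sample slightly and measure
# the change in the model output.
eps = 1e-4
base = model(ecg)
saliency = np.empty_like(ecg)
for i in range(len(ecg)):
    bumped = ecg.copy()
    bumped[i] += eps
    saliency[i] = (model(bumped) - base) / eps

top = np.argsort(np.abs(saliency))[-5:]
print("most influential sample indices:", sorted(top.tolist()))
```

In practice a framework computes this gradient in one backward pass rather than by perturbation; overlaying the resulting magnitudes on the waveform is what reveals, as in the study, which segments (here, the QRS complex) drive the prediction.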
The present study was performed to ascertain the annual changes in the number and nature of traumatic head injuries occurring in high-school rugby matches, and to explore approaches for improving safety in the future. It was based on injury reports submitted at the time of injury to the Kansai Rugby Football Union between Apr. 2009 and Mar. 2016. The mean number of reported cases of traumatic head injury per year during the three pre-amendment years from Apr. 2009 to Mar. 2011 was 18.0, whereas that during the five post-amendment years from Apr. 2012 to Mar. 2016 was 36.2. Of all the traumatic head injuries, those with the highest numbers and proportions of cases for each of the four factors were as follows: (i) occasion of injury: during a match, 115 (48.9%); (ii) condition of the pitch: grass, 105 (44.7%); (iii) school grade: 2, 104 (44.3%); and (iv) cause of injury: tackling, 115 (48.9%). In addition, the odds ratios (ORs) for brain concussion post-amendment as compared with pre-amendment, and for occurrence on grass as compared with on soil, were both a significant 2.83. An exploratory investigation was conducted to clarify whether different factors were associated with the severity of pre- and post-amendment injuries, but no significant ORs were found. In conclusion, the establishment of guidelines related to brain concussion in 2012 increased the number of reports of injuries in high-school rugby and had a definite effect on prompt treatment of brain concussions.
Intravascular ultrasound (IVUS) is a diagnostic modality used during percutaneous coronary intervention. However, specialist skills are required to interpret IVUS images. To address this issue, we developed a new artificial intelligence (AI) program that categorizes vessel components, including calcification and stents, seen in IVUS images of complex lesions. To develop our AI using U-Net, IVUS images taken from patients with angina pectoris were manually segmented into the following categories: lumen area, medial plus plaque area, calcification, and stent. To evaluate our AI's performance, we calculated the classification accuracy of vessel components in IVUS images of vessels with clinically significantly narrowed lumina (< 4 mm²) and those with severe calcification. Additionally, we assessed the correlation between lumen areas in manually labeled ground truth images and those in AI-predicted images, the mean intersection over union (IoU) of a test set, and the recall score for detecting stent struts in each IVUS image in which a stent was present in the test set. Among 3738 labeled images, 323 were randomly selected for use as a test set; the remaining 3415 images were used for training. The classification accuracies for vessels with significantly narrowed lumina and those with severe calcification were 0.97 and 0.98, respectively. Additionally, there was a significant correlation in the lumen area between the ground truth images and the predicted images (ρ = 0.97, R² = 0.97, p < 0.001). However, the mean IoU of the test set was 0.66 and the recall score for detecting stent struts was 0.64. Our AI program accurately classified vessels requiring treatment and vessel components, except for stents, in IVUS images of complex lesions. AI may be a powerful tool for assisting in the interpretation of IVUS imaging and could promote the popularization of IVUS-guided percutaneous coronary intervention in a clinical setting.
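The two segmentation metrics reported above, IoU and recall, compare a predicted mask against a ground-truth mask pixel by pixel. The sketch below computes both on toy boolean masks; the 8x8 masks are illustrative and do not correspond to real IVUS labels.

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union for one boolean class mask."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def recall(pred, gt):
    """Pixel recall: fraction of ground-truth pixels the prediction recovers."""
    tp = np.logical_and(pred, gt).sum()
    return tp / gt.sum() if gt.sum() else 1.0

# Toy masks standing in for one labeled IVUS class (e.g. stent struts).
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True           # 16 ground-truth pixels
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True         # prediction shifted by one pixel

print(f"IoU = {iou(pred, gt):.3f}, recall = {recall(pred, gt):.3f}")
```

The "mean IoU" reported in the study would average this per-class score across all segmentation classes (lumen, media plus plaque, calcification, stent); recall is the natural metric for stent struts, where missing a strut matters more than a loose boundary.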