In medical signal monitoring systems, our technique would accurately estimate heartbeat locations even when only a subset of channels is reliable.
Machine learning (ML) algorithms "learn" information directly from data, and their performance generally improves as the number of high-quality samples grows. The aim of our systematic review is to present the state of the art regarding the implementation of ML techniques in the management of heart failure (HF) patients. We manually searched the MEDLINE and Cochrane databases as well as the reference lists of the relevant review studies and included studies. Our search retrieved 122 relevant studies. These studies mainly refer to (a) the role of ML in the classification of HF patients into distinct categories which may require a different treatment strategy, (b) discrimination of HF patients from the healthy population or other diseases, (c) prediction of HF outcomes, (d) identification of HF patients from electronic records and identification of HF patients with similar characteristics who may benefit from a similar treatment strategy, (e) supporting the extraction of important data from clinical notes, and (f) prediction of outcomes in HF populations with implantable devices (left ventricular assist device, cardiac resynchronization therapy). We concluded that ML techniques may play an important role in the efficient construction of methodologies for diagnosis, management, and prediction of outcomes in HF patients. Electronic supplementary material: The online version of this article (10.1007/s10741-020-10007-3) contains supplementary material, which is available to authorized users.
Background Accurate detection of arrhythmic events in the intensive care unit (ICU) is of paramount significance in providing timely care. However, traditional ICU monitors generate a high rate of false alarms, causing alarm fatigue. In this work, we develop an algorithm to improve life-threatening arrhythmia detection in the ICU using a deep learning approach. Methods and Results This study involves a total of 953 independent life-threatening arrhythmia alarms generated from the ICU bedside monitors of 410 patients. Specifically, we used the ECG (4 channels), arterial blood pressure, and photoplethysmograph signals to accurately detect the onset and offset of various arrhythmias, without prior knowledge of the alarm type. We used a hybrid convolutional neural network (CNN) based classifier that fuses traditional handcrafted features with features automatically learned by the CNN. Further, the proposed architecture remains flexible enough to be adapted to various arrhythmic conditions as well as multiple physiological signals. Our hybrid CNN approach achieved superior performance compared with methods that used only a CNN. We evaluated our algorithm using five repetitions of 5-fold cross-validation and obtained an accuracy of 87.5%±0.5% and a score of 81%±0.9%. Independent evaluation of our algorithm on the publicly available PhysioNet 2015 Challenge database resulted in an overall classification accuracy and score of 93.9% and 84.3%, respectively, indicating its efficacy and generalizability. Conclusions Our method accurately detects multiple arrhythmic conditions. Suitable translation of our algorithm may significantly improve the quality of care in ICUs by reducing the burden of false alarms.
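The feature-fusion idea in the hybrid classifier above can be sketched in a toy form: hand-crafted descriptors of an ECG segment are concatenated with features produced by convolution-plus-pooling stages (stand-ins for learned CNN feature maps). The function names, kernel sizes, and the particular descriptors below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def handcrafted_features(ecg, fs=250):
    """Simple hand-crafted descriptors of an ECG segment:
    amplitude statistics plus a crude dominant-frequency estimate."""
    spectrum = np.abs(np.fft.rfft(ecg))
    freqs = np.fft.rfftfreq(ecg.size, d=1.0 / fs)
    dom_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return np.array([ecg.mean(), ecg.std(), np.ptp(ecg), dom_freq])

def learned_features(ecg, kernels):
    """Stand-in for CNN feature maps: 1-D convolution, ReLU,
    then global max pooling -> one scalar per kernel."""
    feats = []
    for k in kernels:
        act = np.convolve(ecg, k, mode="valid")
        feats.append(np.maximum(act, 0.0).max())
    return np.array(feats)

def fused_feature_vector(ecg, kernels):
    """Fusion step: concatenate hand-crafted and 'learned' features,
    forming the input to a downstream classifier head."""
    return np.concatenate([handcrafted_features(ecg),
                           learned_features(ecg, kernels)])

# Toy ECG-like segment; in practice the kernels would be trained weights.
ecg = np.sin(2 * np.pi * 1.2 * np.arange(500) / 250) \
      + 0.05 * rng.standard_normal(500)
kernels = [rng.standard_normal(16) for _ in range(8)]
x = fused_feature_vector(ecg, kernels)
print(x.shape)  # 4 handcrafted + 8 pooled conv features = (12,)
```

In a real hybrid-CNN system the convolution kernels are learned end-to-end and the fused vector feeds fully connected layers; only the concatenation step is shown faithfully here.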
Risk stratification at the time of hospital admission is of paramount significance in triaging patients and providing timely care. In the present study, we aim to predict multiple clinical outcomes using the data recorded during admission to a cardiac care unit via an optimized machine learning method. This study involves a total of 11,498 patients admitted to a cardiac care unit over two years. Patient demographics, admission type (emergency or outpatient), patient history, lab tests, and comorbidities were used to predict various outcomes. We employed a fully connected neural network architecture and optimized the models for various subsets of input features. Using 10-fold cross-validation, our optimized machine learning model predicted mortality with a mean area under the receiver operating characteristic curve (AUC) of 0.967 (95% confidence interval (CI): 0.963–0.972), heart failure with an AUC of 0.838 (CI: 0.825–0.851), ST-segment elevation myocardial infarction with an AUC of 0.832 (CI: 0.821–0.842), and pulmonary embolism with an AUC of 0.802 (CI: 0.764–0.840), and estimated the duration of stay (DOS) with a mean absolute error of 2.543 days (CI: 2.499–2.586) on data with a mean and median DOS of 6.35 and 5.0 days, respectively. Further, we objectively quantified the importance of each feature and its correlation with the clinical assessment of the corresponding outcome. The proposed method accurately predicts various cardiac outcomes and can be used as a clinical decision support system to provide timely care and optimize hospital resources.
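The AUC values reported above follow the standard definition of the area under the ROC curve. As an illustration, the metric can be computed from scratch via the rank-sum (Mann–Whitney U) identity: the AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counted as one half. The function below is a generic sketch, not the study's evaluation code.

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the rank-sum identity:
    AUC = P(score of random positive > score of random negative),
    counting ties as 0.5."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Perfectly separated scores give AUC = 1.0.
print(auc_score([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
# A mixed case: 3 of 4 positive/negative pairs are ranked correctly.
print(auc_score([0, 1, 0, 1], [0.4, 0.3, 0.2, 0.9]))  # 0.75
```

The pairwise comparison is O(n²) in memory, which is fine for illustration; production pipelines typically use a sort-based implementation such as scikit-learn's `roc_auc_score`.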
As part of the PhysioNet/Computing in Cardiology Challenge 2017, this work focuses on the classification of a single-channel short electrocardiogram (ECG) signal into normal, atrial fibrillation (AF), other, and noise classes. To this end, we propose a shallow convolutional neural network architecture which learns suitable features pertaining to each class while eliminating the need to extract the traditionally used ad hoc features. In particular, we first developed a robust R-peak detector and stacked a fixed number of detected beats with their R-peaks aligned. Each such stack of beats, corresponding to a segment of the ECG record, is classified into one of the four aforementioned classes. To improve robustness, multiple classifiers were trained to classify these segments. The overall record classification was then generated using a voting scheme over the classification results of the individual segments. Our best submission during the official phase achieved a score of 71%, with F1 scores of 86%, 73%, and 56% for the normal, AF, and other classes, respectively.
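The per-record voting step can be illustrated with a minimal sketch: each segment receives a class label, and the record label is the majority vote over segments. The tie-breaking priority order used here is a hypothetical choice for the sake of a deterministic example, not the rule used in the actual submission.

```python
from collections import Counter

CLASSES = ["normal", "AF", "other", "noise"]

def record_label(segment_labels,
                 priority=("noise", "AF", "other", "normal")):
    """Majority vote over per-segment predictions.
    Ties are broken by a fixed priority order (an illustrative
    assumption, not the paper's exact tie-breaking rule)."""
    counts = Counter(segment_labels)
    top = max(counts.values())
    tied = [c for c in priority if counts.get(c, 0) == top]
    return tied[0]

print(record_label(["AF", "AF", "normal"]))  # majority -> AF
print(record_label(["normal", "AF"]))        # tie, broken by priority -> AF
```

Aggregating segment-level decisions this way lets short noisy stretches be outvoted by the rest of the record, which is one reason segment-then-vote pipelines are common for variable-length ECG records.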
Preface

The arrival of the so-called Petabyte Age has compelled the analytics community to pay serious attention to the development of scalable algorithms for intelligent data analysis. In June 2008, Wired magazine featured a special section on "The Petabyte Age" and stated that "...our ability to capture, warehouse, and understand massive amounts of data is changing science, medicine, business, and technology." The recent explosion in social computing has added to the vastly growing amounts of data from which insights can be mined. The term "Big Data" is now emerging as a catch-all phrase to denote vast amounts of data at a scale that requires a rethink of conventional notions of data management.

There is a saying among data researchers that "more data beats better algorithms." Big Data provides ample opportunities to discern hitherto inconceivable insights from data sets. This, however, comes with significant challenges in terms of both computational and storage expense, of a type never addressed before. Volume, velocity, and variability in Big Data repositories necessitate advancing analytics beyond operational reporting and dashboards. Early attempts to address the issue of scalability centered on the development of incremental data mining algorithms. Other traditional approaches to solving scalability problems included sampling, processing data in batches, and the development of parallel algorithms. However, it did not take long to realize that all of these approaches, except perhaps parallelization, have limited utility.

The International Conference on Big Data Analytics (BDA 2012) was conceived against this backdrop and is envisaged to provide a platform to expose researchers and practitioners to ground-breaking opportunities that arise during the analysis and processing of massive volumes of distributed data stored across clusters of networked computers. The conference attracted a total of 42 papers, of which 37 were research track submissions.
From these, five regular papers and five short papers were selected, leading to an acceptance rate of 27%. Four tutorials were also selected, and two tutorials were included in the proceedings. The first tutorial, entitled "Scalable Analytics: Algorithms and Systems," addresses the implementation of three popular machine learning algorithms in a Map-Reduce environment. The second tutorial, "Big-Data: Theoretical, Engineering and Analytics Perspectives," gives a bird's-eye view of the Big Data landscape, including technology, funding, and the emerging focus areas. It also deliberates on the analytical and theoretical perspectives of the ecosystem. The accepted research papers address several aspects of data analytics. These papers have been logically grouped into three broad sections: Data Analytics Applications, Knowledge Discovery Through Information Extraction, and Data Models in Analytics. In the first section, Basil et al. compare several statistical machine learning techniques over electrocardiogram (ECG) datasets. Based on this study, they make recommendations on feat...