The PhysioNet/Computing in Cardiology (CinC) Challenge 2017 focused on differentiating AF from noise, normal, or other rhythms in short-term (9–61 s) ECG recordings performed by patients. A total of 12,186 ECGs were used: 8,528 in the public training set and 3,658 in the private hidden test set. Because a significant fraction of the expert labels showed a high degree of inter-expert disagreement, we implemented a mid-competition bootstrap approach to expert relabeling of the data, leveraging the best-performing Challenge entrants' algorithms to identify contentious labels. A total of 75 independent teams entered the Challenge using a variety of traditional and novel methods, ranging from random forests to a deep learning approach applied to the raw data in the spectral domain. Four teams won the Challenge with an equal high F1 score (averaged across all classes) of 0.83, although the top 11 algorithms scored within 2% of this. A combination of 45 algorithms identified using LASSO achieved an F1 of 0.87, indicating that a voting approach can boost performance.
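The class-averaged F1 used to rank entries can be illustrated with a short sketch. This is not the official Challenge scoring code; it simply shows how a per-class F1 is computed from a confusion matrix and averaged (the official metric averaged F1 over a designated subset of the classes):

```python
import numpy as np

def averaged_f1(conf):
    """Class-averaged F1 from a square confusion matrix.

    conf[i, j] = number of recordings with reference class i
    that were predicted as class j. Illustrative only; the
    official Challenge averaged F1 over specific classes.
    """
    conf = np.asarray(conf, dtype=float)
    f1 = []
    for k in range(conf.shape[0]):
        tp = conf[k, k]                 # correct predictions for class k
        fp = conf[:, k].sum() - tp      # other classes predicted as k
        fn = conf[k, :].sum() - tp      # class k predicted as something else
        f1.append(2 * tp / (2 * tp + fp + fn))
    return float(np.mean(f1))
```

For a two-class matrix `[[8, 2], [2, 8]]`, each class has F1 = 0.8, so the average is 0.8.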
In the past few decades, analysis of heart sound signals (i.e. the phonocardiogram or PCG), especially automated heart sound segmentation and classification, has been widely studied and has been reported to have the potential to detect pathology accurately in clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of high-quality, rigorously validated, and standardized open databases of heart sound recordings. This paper describes a public heart sound database, assembled for an international competition, the PhysioNet/Computing in Cardiology (CinC) Challenge 2016. The archive comprises nine different heart sound databases sourced from multiple research groups around the world. It includes 2435 heart sound recordings in total, collected from 1297 healthy subjects and patients with a variety of conditions, including heart valve disease and coronary artery disease. The recordings were collected in a variety of clinical and nonclinical environments (such as in-home visits), using a variety of equipment. The length of recording varied from several seconds to several minutes. This article reports detailed information about the subjects/patients, including demographics (number, age, gender), recordings (number, location, state and time length), associated synchronously recorded signals, sampling frequency and sensor type used. We also provide a brief summary of the commonly used heart sound segmentation and classification methods, including open source code provided concurrently for the Challenge. A description of the PhysioNet/CinC Challenge 2016 is provided, including the main aims, the training and test sets, the hand-corrected annotations for the different heart sound states, the scoring mechanism, and associated open source code. In addition, several potential benefits of the public heart sound database are discussed.
High false alarm rates in the ICU decrease quality of care by slowing staff response times while increasing patient delirium through noise pollution. The 2015 PhysioNet/Computing in Cardiology Challenge provides a set of 1,250 multi-parameter ICU data segments associated with critical arrhythmia alarms, and challenges the general research community to address the issue of false alarm suppression using all available signals. Each data segment was 5 minutes long (for real-time analysis), ending at the time of the alarm. For retrospective analysis, we provided a further 30 seconds of data after the alarm was triggered. A collection of 750 data segments was made available for training and a set of 500 was held back for testing. Each alarm was reviewed by expert annotators, at least two of whom agreed that the alarm was either true or false. Challenge participants were invited to submit a complete, working algorithm to distinguish true from false alarms, and received a score based on their program's performance on the hidden test set. This score was based on the percentage of alarms correct, but with a penalty that weights the suppression of true alarms five times more heavily than acceptance of false alarms. We provided three example entries based on well-known, open source signal processing algorithms, to serve as a basis for comparison and as a starting point for participants to develop their own code. A total of 38 teams submitted 215 entries in this year's Challenge.
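The penalized accuracy described above can be sketched as follows. This is a hedged reconstruction from the description in the abstract (a score based on the percentage of correct alarms, with suppressed true alarms penalized five times as heavily as accepted false alarms), not a verbatim copy of the official scoring code:

```python
def challenge_score(tp, fp, fn, tn):
    """Penalized percentage-correct score.

    tp: true alarms kept, tn: false alarms suppressed (both correct);
    fp: false alarms kept, fn: true alarms suppressed (both wrong).
    A suppressed true alarm (fn) counts five times as heavily against
    the score as an accepted false alarm (fp).
    """
    return 100.0 * (tp + tn) / (tp + tn + fp + 5.0 * fn)
```

For example, a classifier that keeps 10 true alarms, suppresses 5 false alarms, but also keeps 5 false alarms and suppresses 1 true alarm scores 100 × 15 / (15 + 5 + 5) = 60.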
Despite the important advances achieved in the field of adult electrocardiography signal processing, the analysis of the non-invasive fetal electrocardiogram (NI-FECG) remains a challenge. Currently no gold standard database exists that provides labelled FECG QRS complexes (and other morphological parameters), and publications rely either on proprietary databases or on very limited sets of data recorded from a few (or, more often, just one) individuals. The PhysioNet/Computing in Cardiology Challenge 2013 addressed some of these limitations by publicly releasing a set of NI-FECG data to the scientific community in order to evaluate signal processing techniques for NI-FECG extraction. The Challenge aim was to encourage development of accurate algorithms for locating QRS complexes and estimating the QT interval in noninvasive FECG signals. Using carefully reviewed reference QRS annotations and QT intervals as a gold standard, based on simultaneous direct FECG when possible, the Challenge was designed to measure and compare the performance of participants' algorithms objectively. Multiple challenge events were designed to test basic FHR estimation accuracy, as well as accuracy in measurement of the inter-beat (RR) and QT intervals needed as a basis for derivation of other FECG features. This editorial reviews the background issues, the design of the Challenge, the key achievements, and the follow-up research generated as a result of the Challenge, published in the concurrent special issue of Physiological Measurement.
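To make the QRS-location task concrete, here is a deliberately minimal detector sketch. It is not any Challenge entrant's method, only an illustration of the basic idea of locating R peaks: emphasize large deflections by squaring, then pick local maxima above a threshold while enforcing a physiologically motivated refractory period between beats. The threshold fraction and refractory duration are assumed values for illustration:

```python
import numpy as np

def detect_qrs(sig, fs, thresh_frac=0.6, refractory=0.25):
    """Toy QRS detector: return sample indices of detected R peaks.

    sig: 1-D ECG-like signal; fs: sampling frequency in Hz.
    Squares the signal to emphasize R peaks, then keeps local maxima
    above thresh_frac of the global maximum, at least `refractory`
    seconds apart.
    """
    energy = np.asarray(sig, dtype=float) ** 2
    thresh = thresh_frac * energy.max()
    min_gap = int(refractory * fs)      # refractory period in samples
    peaks, last = [], -min_gap
    for i in range(1, len(energy) - 1):
        if (energy[i] >= thresh and energy[i] >= energy[i - 1]
                and energy[i] > energy[i + 1] and i - last >= min_gap):
            peaks.append(i)
            last = i
    return peaks
```

Differences between successive detected indices give RR intervals, from which an FHR estimate follows as 60 × fs / mean(RR in samples).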
In the past few decades heart sound signals (i.e., phonocardiograms or PCGs) have been widely studied. Automated heart sound segmentation and classification techniques have the potential to screen for pathologies in a variety of clinical applications. However, comparative analyses of algorithms in the literature have been hindered by the lack of a large and open database of heart sound recordings. The PhysioNet/Computing in Cardiology (CinC) Challenge 2016 addresses this issue by assembling the largest public heart sound database, aggregated from eight sources obtained by seven independent research groups around the world. The database includes 4,430 recordings taken from 1,072 subjects, totalling 233,512 heart sounds collected from both healthy subjects and patients with a variety of conditions such as heart valve disease and coronary artery disease. These recordings were collected using heterogeneous equipment in both clinical and nonclinical settings (such as in-home visits). The length of recording varied from several seconds to several minutes. Additional data provided include subject demographics (age and gender), recording information (number per patient, body location, and length of recording), synchronously recorded signals (such as ECG), sampling frequency and sensor type used. Participants were asked to classify recordings as normal, abnormal, or not possible to evaluate (noisy/uncertain). The overall score for an entry was based on a weighted sensitivity and specificity score with respect to manual expert annotations. A brief description of a baseline classification method is provided, including a description of the open source code released in association with the Challenge. The open source code achieved a score of 0.71 (Se = 0.65, Sp = 0.76). During the official phase of the competition, a total of 48 teams submitted 348 open source entries, with a highest score of 0.86 (Se = 0.94, Sp = 0.78).
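The quoted scores are consistent with the overall score being the mean of the (weighted) sensitivity and specificity; the small sketch below shows that reading, which reproduces both reported figures. This is an inference from the numbers in the abstract, not the official scoring code:

```python
def overall_score(se, sp):
    """Mean of weighted sensitivity and specificity.

    With the values quoted in the abstract this reproduces the
    reported scores: (0.65 + 0.76) / 2 = 0.705 ~ 0.71 and
    (0.94 + 0.78) / 2 = 0.86.
    """
    return (se + sp) / 2.0
```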
Background: The detection of changes in the magnitude of directional coupling between two non-linear time series is a common subject of interest in the biomedical domain, including studies involving the respiratory chemoreflex system. Although transfer entropy is a useful tool for this purpose, no study to date has investigated how different transfer entropy estimation methods perform in typical biomedical applications featuring small sample sizes and the presence of outliers.

Methods: With respect to the detection of increased coupling strength, we compared three transfer entropy estimation techniques using both simulated time series and respiratory recordings from lambs. The following estimation methods were analyzed: fixed-binning with ranking, kernel density estimation (KDE), and the Darbellay-Vajda (D-V) adaptive partitioning algorithm extended to three dimensions. In the simulated experiment, sample size was varied from 50 to 200 while coupling strength was increased. To introduce outliers, the heavy-tailed Laplace distribution was utilized. In the lamb experiment, the objective was to detect increased respiratory-related chemosensitivity to O2 and CO2 induced by a drug, domperidone. Specifically, the separate influence of end-tidal PO2 and PCO2 on minute ventilation (V̇E) before and after administration of domperidone was analyzed.

Results: In the simulation, KDE detected increased coupling strength at the lowest SNR among the three methods. In the lamb experiment, D-V partitioning resulted in the statistically strongest increase in transfer entropy post-domperidone for PO2 → V̇E. In addition, D-V partitioning was the only method that could detect an increase in transfer entropy for PCO2 → V̇E, in agreement with experimental findings.

Conclusions: Transfer entropy is capable of detecting changes in directional coupling in non-linear biomedical time series featuring a small number of observations and the presence of outliers. The results of this study suggest that fixed-binning, even with ranking, is too primitive, and although there is no clear winner between KDE and D-V partitioning, the reader should note that KDE requires more computational time and more extensive parameter selection than D-V partitioning. We hope this study provides a guideline for selecting an appropriate transfer entropy estimation method.
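A minimal fixed-binning transfer entropy estimator makes the quantity being compared concrete. This sketch omits the ranking step used in the paper and assumes a lag of one sample and equal-width bins; it estimates TE(X → Y) = Σ p(y_{t+1}, y_t, x_t) log₂[p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t)] directly from bin counts:

```python
import numpy as np
from collections import Counter

def transfer_entropy_binned(x, y, n_bins=4):
    """Fixed-binning transfer entropy TE(X -> Y) in bits.

    Illustrative sketch: discretize both series into equal-width
    bins, then estimate the joint and conditional probabilities in
    the TE definition from simple counts (lag of one sample).
    """
    x = np.digitize(x, np.histogram_bin_edges(x, n_bins)[1:-1])
    y = np.digitize(y, np.histogram_bin_edges(y, n_bins)[1:-1])
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    singles_y = Counter(y[:-1])                     # y_t
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        cond_full = c / pairs_yx[(y0, x0)]          # p(y1 | y0, x0)
        cond_self = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y1 | y0)
        te += p_joint * np.log2(cond_full / cond_self)
    return te
```

When y is a one-sample-delayed copy of an i.i.d. signal x, knowing x_t determines y_{t+1}, so the estimate approaches log₂(n_bins) bits; for independent series it stays near zero (apart from the small-sample bias that motivates the comparison in the paper).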
The WaveForm DataBase (WFDB) Toolbox for MATLAB/Octave enables integrated access to PhysioNet's software and databases. Using the WFDB Toolbox for MATLAB/Octave, users have access to over 50 physiological databases in PhysioNet. The toolbox provides access to over 4 TB of biomedical signals, including ECG, EEG, EMG, and PLETH. Additionally, most signals are accompanied by metadata such as medical annotations of clinical events: arrhythmias, sleep stages, seizures, hypotensive episodes, etc. Users of this toolbox should easily be able to reproduce, validate, and compare results published based on PhysioNet's software and databases.
Gait speed is a powerful clinical marker for mobility impairment in patients suffering from neurological disorders. However, assessment of gait speed in coordination with delivery of comprehensive care is usually constrained to clinical environments and is often limited due to mounting demands on the availability of trained clinical staff. These limitations in assessment design can give rise to poor ecological validity and a limited ability to tailor interventions to individual patients. Recent advances in wearable sensor technologies have fostered the development of new methods for monitoring parameters that characterize mobility impairment, such as gait speed, outside the clinic, and therefore address many of the limitations associated with clinical assessments. However, these methods are often validated using normal gait patterns, and extending their utility to subjects with gait impairments continues to be a challenge. In this paper, we present a machine learning method for estimating gait speed using a configurable array of skin-mounted, conformal accelerometers. We establish the accuracy of this technique on treadmill walking data from subjects with normal gait patterns and subjects with multiple sclerosis-induced gait impairments. For subjects with normal gait, the best performing model systematically overestimates speed by only 0.01 m/s, detects changes in speed to within less than 1%, and achieves a root-mean-square error of 0.12 m/s. Extending these models trained on normal gait to subjects with gait impairments yields only minor changes in model performance. For example, for subjects with gait impairments, the best performing model systematically overestimates speed by 0.01 m/s, quantifies changes in speed to within 1%, and achieves a root-mean-square error of 0.14 m/s.
Additional analyses demonstrate that there is no correlation between gait speed estimation error and impairment severity, and that the estimated speeds maintain the clinical significance of ground truth speed in this population. These results support the use of wearable accelerometer arrays for estimating walking speed in normal subjects and their extension to MS patient cohorts with gait impairment.
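The two accuracy figures quoted above, systematic bias and root-mean-square error, can be computed with a few lines; this sketch only illustrates the evaluation metrics, not the paper's estimation model:

```python
import numpy as np

def speed_error_metrics(true_speed, est_speed):
    """Return (bias, rmse) of estimated vs. reference gait speed.

    bias: mean signed error in m/s (positive = overestimation);
    rmse: root-mean-square error in m/s.
    """
    err = np.asarray(est_speed, dtype=float) - np.asarray(true_speed, dtype=float)
    return float(err.mean()), float(np.sqrt((err ** 2).mean()))
```

For reference speeds [1.0, 1.2] m/s and estimates [1.1, 1.1] m/s, the errors +0.1 and −0.1 cancel in the bias (0.0 m/s) but not in the RMSE (0.1 m/s), which is why both figures are reported.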