Chest X-ray imaging used for the early detection and screening of COVID-19 pneumonia is both accessible worldwide and affordable compared with other non-invasive technologies. Additionally, deep learning methods have recently shown remarkable results in detecting COVID-19 on chest X-rays, making chest radiography a promising screening technology for COVID-19. Deep learning relies on a large amount of data to avoid overfitting. While overfitting can yield a near-perfect fit to the original training dataset, it can cause the model to fail to generalize to a new testing dataset. In image processing, an image augmentation step (i.e., adding more training data) is often used to reduce overfitting on the training dataset and improve prediction accuracy on the testing dataset. In this paper, we examined the impact of geometric augmentations, as implemented in several recent publications, on detecting COVID-19. We compared the performance of 17 deep learning algorithms with and without different geometric augmentations. We empirically examined the influence of augmentation with respect to detection accuracy, dataset diversity, augmentation methodology, and network size. Contrary to expectation, our results show that removing the recently used geometric augmentation steps actually improved the Matthews correlation coefficient (MCC) of the 17 models. The MCC without augmentation (MCC = 0.51) outperformed four recent geometric augmentations (MCC = 0.47 for Data Augmentation 1, MCC = 0.44 for Data Augmentation 2, MCC = 0.48 for Data Augmentation 3, and MCC = 0.49 for Data Augmentation 4). When we retrained a recently published deep learning model without augmentation on the same dataset, the detection accuracy increased significantly (McNemar's test: χ² = 163.2, p = 2.23 × 10⁻³⁷). This finding may improve current deep learning algorithms that use geometric augmentations for detecting COVID-19.
We also provide clinical perspectives on geometric augmentation to consider when developing a robust COVID-19 X-ray-based detector.
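The two statistics reported above can be computed directly from confusion-matrix and disagreement counts. A minimal sketch, using illustrative counts rather than the paper's data (McNemar's χ² here uses the common continuity-corrected form, an assumption):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def mcnemar_chi2(b, c):
    """McNemar's test statistic (with continuity correction) from the
    two discordant counts: b = model A right / model B wrong, c = vice versa."""
    return (abs(b - c) - 1) ** 2 / (b + c)

# Illustrative counts only (not the paper's data)
print(round(mcc(tp=90, tn=80, fp=20, fn=10), 3))   # MCC on a toy confusion matrix
print(mcnemar_chi2(10, 30))                         # χ² on toy discordant counts
```

MCC is preferred over plain accuracy on imbalanced screening datasets because it only approaches 1 when all four confusion-matrix cells are favorable.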
Chest radiography is a critical tool in the early detection, management planning, and follow-up evaluation of COVID-19 pneumonia; however, in smaller clinics around the world, there is a shortage of radiologists to analyze the large number of examinations performed, especially during a pandemic. The limited availability of high-resolution computed tomography and real-time polymerase chain reaction testing in developing countries and in regions of high patient turnover also emphasizes the importance of chest radiography as both a screening and a diagnostic tool. In this paper, we compare the performance of 17 available deep learning algorithms in identifying imaging features of COVID-19 pneumonia. We utilize an existing diagnostic technology (chest radiography) and a preexisting neural network (DarkNet-19) to detect imaging features of COVID-19 pneumonia. Our approach eliminates the extra time and resources needed to develop new technology and associated algorithms, thus aiding front-line healthcare workers in the race against the COVID-19 pandemic. Our results show that DarkNet-19 is the optimal pre-trained neural network for the detection of radiographic features of COVID-19 pneumonia, achieving an overall accuracy of 94.28% on 5,854 X-ray images. We also present a custom visualization of the results that can be used to highlight important visual biomarkers of the disease and of disease progression.
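The paper's custom visualization is not specified here; one standard way to highlight visual biomarkers from a convolutional classifier is a class-activation-map (CAM) style heatmap, which weights the last convolutional layer's feature maps by the target class's weights. A hedged NumPy sketch of that general technique (not necessarily the authors' exact method; the feature maps and weights below are toy data):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM-style heatmap: weighted sum of the last conv layer's
    feature maps (K, H, W) by the target class's weights (K,),
    clipped to positive evidence and rescaled to [0, 1] so it can
    be upsampled and overlaid on the X-ray."""
    heatmap = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    heatmap = np.maximum(heatmap, 0.0)
    if heatmap.max() > 0:
        heatmap /= heatmap.max()
    return heatmap

rng = np.random.default_rng(0)
maps = rng.random((8, 7, 7))   # toy feature maps from a final conv layer
weights = rng.random(8)        # toy weights for the predicted class
cam = class_activation_map(maps, weights)
print(cam.shape)
```

In practice the low-resolution heatmap would be bilinearly upsampled to the input image size before overlay.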
Cardiovascular diseases (CVDs) have become the number one threat to human health. Their numerous complications mean that many countries remain unable to prevent the rapid growth of such diseases, even though significant health resources have been invested in their prevention and management. The electrocardiogram (ECG) is the most important non-invasive physiological signal for CVD screening and diagnosis. To explore a heartbeat event classification model using single- or multiple-lead ECG signals, we proposed a novel deep learning algorithm and conducted a systematic comparison across different methods and databases. This new approach aims to improve accuracy and reduce training time by combining a convolutional neural network (CNN) with a bidirectional long short-term memory (BiLSTM) network. To our knowledge, this approach has not been investigated to date. In this study, Database I, with single-lead ECG, and Database II, with 12-lead ECG, were used to explore a practical and viable heartbeat event classification model. An evolutionary neural system approach (Method I) and a deep learning approach (Method II) that combines a CNN with a BiLSTM network were compared and evaluated for heartbeat event classification. Overall, Method I achieved slightly better performance than Method II. However, Method I took, on average, 28.3 h to train the model, whereas Method II needed only 1 h. Method II achieved accuracies of 80%, 82.6%, and 85% on the China Physiological Signal Challenge 2018, PhysioNet Challenge 2017, and Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia datasets, respectively. These results are impressive compared with the performance of state-of-the-art algorithms used for the same purpose.
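The hybrid idea behind Method II can be sketched in miniature: a 1-D convolution extracts local morphological features, and a forward plus a backward recurrence over those features supplies bidirectional temporal context. The tanh recurrence and layer sizes below are illustrative stand-ins for a full CNN-BiLSTM, not the paper's architecture:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D convolution: local feature extraction (the 'CNN' stage)."""
    return np.convolve(x, kernel, mode="valid")

def simple_rnn(seq, w_in=0.5, w_rec=0.5):
    """Minimal tanh recurrence standing in for one LSTM direction."""
    h, out = 0.0, []
    for x in seq:
        h = np.tanh(w_in * x + w_rec * h)
        out.append(h)
    return np.array(out)

def cnn_bilstm_features(ecg, kernel):
    """CNN stage, then forward and backward recurrences concatenated
    (the 'Bi' in BiLSTM): one two-dimensional feature vector per step."""
    feats = conv1d(ecg, kernel)
    fwd = simple_rnn(feats)
    bwd = simple_rnn(feats[::-1])[::-1]
    return np.stack([fwd, bwd], axis=1)   # shape (T, 2)

ecg = np.sin(np.linspace(0, 4 * np.pi, 100))   # toy single-lead signal
out = cnn_bilstm_features(ecg, kernel=np.array([0.25, 0.5, 0.25]))
print(out.shape)
```

A real implementation would stack several convolution/pooling layers and use gated LSTM cells; the point here is only the data flow: convolutional features in, bidirectional temporal features out.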
Electrocardiogram (ECG) signal evaluation is routinely used in clinics as a significant diagnostic method for detecting arrhythmia. However, manually evaluating ECG signals is very labor-intensive due to their small amplitude. Using automated detection and classification methods in the clinic can assist doctors in making accurate and expeditious diagnoses. In this study, we developed a classification method for arrhythmia based on the combination of a convolutional neural network and long short-term memory, which was then used to diagnose eight types of ECG signal, including normal sinus rhythm. The ECG data for the experiment were derived from the MIT-BIH arrhythmia database. The experimental method consisted mainly of two parts: the input data of the model were two-dimensional grayscale images converted from one-dimensional signals, and detection and classification of the input data were carried out using the combined model. The advantage of this method is that it does not require feature extraction or noise filtering of the ECG signal. The experimental results showed that the implemented method achieved high classification performance, with accuracy, specificity, and sensitivity of 99.01%, 99.57%, and 97.67%, respectively. Our proposed model can assist doctors in accurately detecting arrhythmia during routine ECG screening.
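The abstract does not detail how the one-dimensional signals become two-dimensional grayscale images; one common approach (an assumption, since the paper's conversion may instead plot the waveform) is to min-max scale a fixed-length heartbeat segment to 8-bit intensities and reshape it into a square image:

```python
import numpy as np

def signal_to_grayscale_image(segment, size=16):
    """Map a 1-D heartbeat segment of length size*size to an 8-bit
    grayscale image by min-max scaling and reshaping. The segment
    length and row-major reshape are illustrative assumptions."""
    seg = np.asarray(segment, dtype=float)
    assert seg.size == size * size, "segment must have size*size samples"
    lo, hi = seg.min(), seg.max()
    scaled = (seg - lo) / (hi - lo) if hi > lo else np.zeros_like(seg)
    return (scaled * 255).astype(np.uint8).reshape(size, size)

beat = np.sin(np.linspace(0, 2 * np.pi, 256))   # toy heartbeat segment
img = signal_to_grayscale_image(beat)
print(img.shape, img.dtype)
```

The appeal of this style of conversion is exactly what the abstract notes: the 2-D network consumes the raw morphology directly, with no hand-crafted feature extraction or filtering step.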
Evaluating the performance of photoplethysmogram (PPG) event detection algorithms requires a large number of PPG signals with different noise levels and sampling frequencies. As publicly available PPG databases provide few options, artificially constructed PPG signals can also be used to facilitate this evaluation. Here, we propose a dynamic model to synthesize PPG over specified time durations and sampling frequencies. In this model, a single pulse was simulated by two Gaussian functions. Additionally, the beat-to-beat intervals were simulated using a normal distribution with specified mean and standard deviation values. To add periodicity and generate a complete signal, the circular motion principle was used. We synthesized three classes of pulses by emulating three different templates: excellent (systolic and diastolic waves are salient), acceptable (systolic and diastolic waves are not salient), and unfit (systolic and diastolic waves are noisy). The optimized fitting of the Gaussian functions to the templates yielded correlations of 0.99, 0.98, and 0.85 between the template and synthetic pulses for the excellent, acceptable, and unfit classes, respectively, with mean square errors of 0.001, 0.003, and 0.017, respectively. By comparing the heart rate variability of real PPG and randomly synthesized PPG over 5 min in 116 records from the MIMIC III database, strong correlations were found in SDNN,
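The single-pulse model described above (one Gaussian for the systolic wave, one for the diastolic wave, with beat-to-beat intervals drawn from a normal distribution) can be sketched as follows. The amplitudes, centers, widths, and interval parameters are illustrative assumptions, not the paper's fitted template values:

```python
import numpy as np

def gaussian(t, amp, mu, sigma):
    return amp * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def ppg_pulse(t, sys=(1.0, 0.3, 0.08), dia=(0.45, 0.6, 0.1)):
    """One PPG pulse as the sum of a systolic and a diastolic Gaussian.
    Each triple is (amplitude, center, width) on a 0-1 normalized beat;
    the values are illustrative, not the paper's fitted templates."""
    return gaussian(t, *sys) + gaussian(t, *dia)

def synthesize_ppg(n_beats=5, fs=125, mean_rr=0.8, sd_rr=0.05, seed=0):
    """Concatenate pulses whose beat-to-beat intervals are drawn from a
    normal distribution with the given mean and standard deviation."""
    rng = np.random.default_rng(seed)
    beats = []
    for _ in range(n_beats):
        rr = max(0.4, rng.normal(mean_rr, sd_rr))   # beat interval in seconds
        t = np.arange(0.0, rr, 1.0 / fs)            # sample times for this beat
        beats.append(ppg_pulse(t / rr))             # time normalized per beat
    return np.concatenate(beats)

sig = synthesize_ppg()
print(sig.shape)
```

Widening or lowering the diastolic Gaussian relative to the systolic one moves a pulse from the "excellent" toward the "acceptable" template class; adding noise would emulate the "unfit" class.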