Accurate screening for septal defects is important for supporting radiologists' interpretative work. Some previous studies have proposed semantic segmentation and object detection approaches for fetal heart detection; unfortunately, those models cannot distinguish different objects of the same class. Semantic segmentation segregates regions containing objects of only a single class, whereas the fetal heart contains multiple objects, such as the atria, ventricles, valves, and aorta. In addition, blurry boundaries (shadows) and inconsistencies in ultrasonography acquisition can cause wide variations. This study utilizes Mask-RCNN (MRCNN) to handle fetal ultrasonography images and employs it to detect and segment defects in heart walls containing multiple objects. To our knowledge, this is the first study of a medical application of instance segmentation for septal defect detection. The MRCNN architecture with a ResNet50 backbone and a learning rate of 0.0001 trains two times faster on fetal heart images than other object detection methods, such as Faster-RCNN (FRCNN). We demonstrate a strong correlation between the predicted septal defects and the ground truth, measured by mean average precision (mAP). As the results show, the proposed MRCNN model achieves good performance in multiclass detection of the heart chambers, with 97.59% for the right atrium, 99.67% for the left atrium, 86.17% for the left ventricle, 98.83% for the right ventricle, and 99.97% for the aorta. We also report competitive results for detecting defect holes in the atria and ventricles via semantic and instance segmentation: the mAP is about 99.48% for MRCNN and 82% for FRCNN. We suggest that evaluation and prediction with our proposed model provide reliable detection of septal defects, whether in the atria, the ventricles, or both.
These results suggest that the model has high potential to help cardiologists complete the initial screening for fetal congenital heart disease.
Accurate segmentation of the fetal heart in echocardiography images is essential for detecting structural abnormalities such as congenital heart defects (CHDs). This process remains challenging due to wide variations attributable to factors such as maternal obesity, abdominal scars, amniotic fluid volume, and great-vessel connections. CHD detection rates, even with expert involvement, are generally substandard; measurement accuracy remains highly dependent on the clinician's training, skills, and experience. To automate this process, this study proposes a deep learning-based computer-aided fetal heart echocardiography examination with an instance segmentation approach, which inherently segments the four standard heart views and detects defects simultaneously. We conducted several experiments with 1149 fetal heart images, predicting 24 objects: four shapes of fetal heart standard views, 17 heart-chamber objects across the views, and three congenital heart defect cases. The results show that the proposed model achieved satisfactory performance for standard-view segmentation, with a 79.97% intersection over union and an 89.70% Dice similarity coefficient. It also performed well in CHD detection, with a mean average precision of around 98.30% for intra-patient variation and 82.42% for inter-patient variation. We believe that automatic segmentation and detection techniques could make an important contribution toward improving congenital heart disease diagnosis rates.
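The intersection-over-union and Dice similarity figures reported above are standard overlap metrics for binary segmentation masks. As a minimal sketch (the function name and toy masks below are ours, not from the study), they can be computed as:

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray):
    """Compute intersection-over-union and Dice similarity for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0     # empty masks count as a perfect match
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)
```

Note that Dice is always at least as large as IoU on the same pair of masks, which is consistent with the 89.70% Dice versus 79.97% IoU reported above.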
Delineating the electrocardiogram (ECG) waveform is an important step in cardiology diagnosis. It refers to extracting the start, peak, and end points of each waveform from the ECG morphology. Because ECG signals present varied shapes and abnormalities, conventional computer algorithms often fail to extract the essential features of heart information. It is therefore critical to investigate accurate, automated ECG signal delineation. In this study, we propose a delineation process using a bidirectional long short-term memory (BiLSTM) classifier. The process is conducted from one beat to the next (beat-to-beat), meaning the ECG waveform is classified from the start of one P-wave to the start of the next. However, such a classifier lacks a feature extraction stage, which reduces classification accuracy. To improve performance, convolutional layers serving as feature extractors are stacked with the BiLSTM to form ConvBiLSTM. We conducted experiments on seven ECG waveform classes using the publicly available QT Database, whose annotations of the main waveforms enable a highly accurate classifier: Pstart–Pend, Pend–QRSstart, QRSstart–Rpeak, Rpeak–QRSend, QRSend–Tstart, Tstart–Tend, and Tend–Pstart. The proposed model showed remarkable results, with overall average performances of 99.83% accuracy, 98.82% sensitivity, 99.90% specificity, 98.86% precision, and 98.84% F1 score. Based on these promising results, the efficacy of the proposed stacked ConvBiLSTM model in classifying ECG waveforms provides a great opportunity to help cardiologists make faster diagnostic assessments.
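The beat-to-beat delineation target described above assigns every sample between consecutive P-wave starts to one of seven segment classes. A minimal sketch of building such per-sample labels from fiducial points (the function name and the sample indices below are illustrative assumptions, not the paper's code):

```python
# The seven segment classes, in their order within one beat.
SEGMENT_CLASSES = [
    "Pstart-Pend", "Pend-QRSstart", "QRSstart-Rpeak",
    "Rpeak-QRSend", "QRSend-Tstart", "Tstart-Tend", "Tend-Pstart",
]

def label_samples(fiducials, next_p_start):
    """Assign one of the seven segment classes to every sample of a beat.

    fiducials: dict of sample indices keyed by Pstart, Pend, QRSstart,
    Rpeak, QRSend, Tstart, Tend; next_p_start closes the final segment.
    """
    bounds = [fiducials[k] for k in
              ("Pstart", "Pend", "QRSstart", "Rpeak", "QRSend", "Tstart", "Tend")]
    bounds.append(next_p_start)
    labels = []
    for cls, (lo, hi) in zip(SEGMENT_CLASSES, zip(bounds, bounds[1:])):
        labels.extend([cls] * (hi - lo))  # half-open interval [lo, hi)
    return labels
```

In the actual model, a per-sample sequence like this would serve as the classification target for the ConvBiLSTM.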
Background The generalization capacity of deep learning (DL) models for atrial fibrillation (AF) detection remains lacking. Previous studies built DL models on only a single sampling frequency from a specific device. Moreover, each electrocardiogram (ECG) acquisition dataset produces a different signal length and sampling frequency to ensure sufficient precision of the R–R intervals used to determine heart rate variability (HRV). An accurate HRV is the gold standard for predicting the AF condition; a current challenge is therefore to determine whether a DL approach can analyze raw ECG data from a broad range of devices. This paper demonstrates powerful results for an end-to-end implementation of AF detection based on a convolutional neural network (AFibNet). The method uses a single learning system regardless of signal length and sampling frequency. For deployment, AFibNet runs on a computational cloud-based DL system. This study utilized a one-dimensional convolutional neural network (1D-CNN) model for 11,842 subjects; it was trained and validated with 8232 records from three datasets and tested with 3610 records from eight datasets. The predicted results, compared with diagnoses made by human practitioners, showed 99.80% accuracy, sensitivity, and specificity. Results When tested on unseen data, AF detection reached 98.94% accuracy, 98.97% sensitivity, and 98.97% specificity at a sample period of 0.02 seconds using the DL cloud system. To improve confidence in the AFibNet model, it was also validated against 18 arrhythmia conditions defined as a Non-AF class, increasing the data from 11,842 to 26,349 instances over three classes: normal sinus (N), AF, and Non-AF. The results were 96.36% accuracy, 93.65% sensitivity, and 96.92% specificity.
Conclusion These findings demonstrate that the proposed approach can use unseen data to derive feature maps and reliably detect AF periods. We found that our cloud-DL system is suitable for practical deployment.
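Handling a broad range of devices, as described above, requires mapping recordings of varying length and sampling frequency onto a fixed-size network input. One common way to do this is linear-interpolation resampling followed by padding or truncation; a minimal sketch (the function name and target values are our assumptions, not necessarily AFibNet's preprocessing):

```python
import numpy as np

def resample_ecg(signal, fs_in, fs_out, target_len):
    """Resample a raw ECG segment from fs_in Hz to fs_out Hz, then pad
    with zeros or truncate to target_len samples, so one network can
    consume recordings from devices with different sampling rates."""
    signal = np.asarray(signal, dtype=float)
    duration = len(signal) / fs_in
    n_out = int(round(duration * fs_out))
    t_in = np.arange(len(signal)) / fs_in
    t_out = np.arange(n_out) / fs_out
    out = np.interp(t_out, t_in, signal)  # linear interpolation on the time grid
    if len(out) < target_len:
        out = np.pad(out, (0, target_len - len(out)))
    return out[:target_len]
```

A real pipeline would typically also normalize amplitude, since different devices use different gains.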
Background The electrocardiogram (ECG) is a widely used diagnostic tool for observing patients' cardiac activity and ascertaining heart abnormalities. Artifacts and noise are the primary problems in ECG signal processing. Conventional denoising techniques have been proposed in the literature; however, they have drawbacks: for example, determining a suitable wavelet basis function and threshold can be time-consuming. This paper presents end-to-end learning using a denoising auto-encoder (DAE) for the denoising algorithm and a convolutional bidirectional long short-term memory (ConvBiLSTM) network for ECG delineation, classifying ECG waveforms in terms of the PQRST-waves and isoelectric lines. Denoising reconstruction using unsupervised learning based on an encoder-decoder process is proposed to address these drawbacks. First, the encoder reduces the ECG signal to a low-dimensional vector. Second, the decoder reconstructs the signal. Finally, the reconstructed ECG signal is passed to the ConvBiLSTM. The proposed DAE-ConvBiLSTM architecture provides an end-to-end diagnosis for heart abnormality detection. Results The DAE-ConvBiLSTM obtained averages above 98.59% for accuracy, sensitivity, specificity, precision, and F1 score, improving on existing studies. The DAE-ConvBiLSTM was also tested on detecting T-wave morphology abnormalities (due to ventricular repolarisation). Conclusion The developed architecture for detecting heart abnormalities, combining an unsupervised DAE with a supervised ConvBiLSTM, is proposed as an end-to-end learning algorithm. In the future, the precise accuracy of the ECG main-waveform delineation will affect heart abnormality detection in clinical practice.
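The corrupt-encode-decode pipeline described above can be illustrated with an untrained, fully linear toy model. The paper's DAE uses learned layers; the shapes, noise level, and random weights here are illustrative assumptions only, showing the data flow rather than a working denoiser:

```python
import numpy as np

rng = np.random.default_rng(0)

def dae_forward(x, W_enc, W_dec, noise_std=0.1):
    """One forward pass of a toy denoising auto-encoder: corrupt the
    input, encode it to a low-dimensional code, decode it back to the
    original signal length."""
    noisy = x + rng.normal(0.0, noise_std, size=x.shape)  # corruption step
    code = np.tanh(noisy @ W_enc)                          # encoder: low-dim vector
    recon = code @ W_dec                                   # decoder: reconstruction
    return recon

# Toy shapes: a 360-sample ECG window compressed to a 32-dim code.
x = rng.normal(size=(1, 360))
W_enc = rng.normal(scale=0.05, size=(360, 32))
W_dec = rng.normal(scale=0.05, size=(32, 360))
recon = dae_forward(x, W_enc, W_dec)
```

Training would adjust `W_enc` and `W_dec` to minimize the reconstruction error against the clean signal, which is what makes the corruption step useful.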
The acute shortage of trained and experienced sonographers makes the detection of congenital heart defects (CHDs) extremely difficult. To minimize this difficulty, accurate fetal heart segmentation for early localization of such structural heart abnormalities prior to delivery is essential. However, segmentation is not an easy task due to the small size of the fetal heart structure. Moreover, manually identifying the standard cardiac planes, primarily based on the four-chamber view, requires a well-trained, experienced clinician. In this paper, a CNN method using the U-Net architecture is proposed to automate fetal cardiac standard-plane segmentation from ultrasound images. A total of 519 fetal cardiac images were obtained from three videos and divided into training and testing data. The testing data consist of 106 slices covering three four-chamber segmentation tasks: atrial septal defect (ASD), ventricular septal defect (VSD), and normal. A post-processing step is needed to enhance the segmentation result. In this paper, a combination of U-Net and Otsu thresholding gives the best performance, with 99.48% pixel accuracy, 96.73% mean accuracy, 94.92% mean intersection over union, and a 0.21% segmentation error. In the future, the implementation of deep learning in the study of CHDs holds significant potential for identifying novel CHDs in heterogeneous fetal hearts.
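The Otsu thresholding used for post-processing picks the gray level that maximizes the between-class variance of the intensity histogram, turning the network's soft output into a clean binary mask. A self-contained sketch for 8-bit images (an exhaustive-search implementation, not the study's code):

```python
import numpy as np

def otsu_threshold(image):
    """Return the Otsu threshold for an 8-bit grayscale image: the level
    that maximizes the between-class variance of foreground vs background."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # background mean
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1   # foreground mean
        var = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Pixels at or above the returned level are treated as foreground; production code would normally use an optimized library routine instead of this O(256 x 256-bin) loop.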
Background Electrocardiogram (ECG) signal classification plays a critical role in the automatic diagnosis of heart abnormalities. While many ECG signal patterns cannot be recognized by a human interpreter, they can be detected with precision using artificial intelligence approaches, making the ECG a powerful non-invasive biomarker. However, rapid and accurate ECG signal classification is difficult due to the signal's low amplitude, complexity, and non-linearity. Widely available deep learning (DL) methods present an opportunity to substantially improve the accuracy of automated ECG classification using rhythm or beat features. Unfortunately, a comprehensive, general evaluation of a specific DL architecture for ECG analysis across a wide variety of rhythm and beat features has not been previously reported; some previous studies detected ECG class abnormalities through rhythm or beat features separately. Methods This study proposes a single architecture based on the DL method, with a one-dimensional convolutional neural network (1D-CNN), to automatically classify 24 patterns of ECG signals through both rhythm and beat. To validate the proposed model, five databases comprising nine ECG rhythm classes and 15 ECG beat classes were used. The proposed DL network was applied to and studied on datasets with different sampling frequencies in intra- and inter-patient schemes. Results Using a 10-fold cross-validation scheme, the model achieved an accuracy of 99.98%, a sensitivity of 99.90%, a specificity of 99.89%, a precision of 99.90%, and an F1-score of 99.99% for ECG rhythm classification. For ECG beat classification, it obtained an accuracy of 99.87%, a sensitivity of 96.97%, a specificity of 99.89%, a precision of 92.23%, and an F1-score of 94.39%.
In conclusion, this study provides clinicians with an advanced methodology for detecting and discriminating heart abnormalities across different ECG rhythm and beat assessments using a single proposed DL architecture.
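The accuracy, sensitivity, specificity, precision, and F1 figures reported across these studies are per-class metrics derived from a confusion matrix. A minimal sketch (the function name and the counts in the usage example are illustrative, not the studies' data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard per-class classification metrics from confusion-matrix counts:
    true positives, false positives, true negatives, false negatives."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return accuracy, sensitivity, specificity, precision, f1
```

For multiclass problems such as the 24-pattern task above, these are computed one-vs-rest per class and then averaged.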
Early prenatal screening with ultrasound (US) can significantly lower newborn mortality caused by congenital heart diseases (CHDs). However, the required fetal cardiology expertise and the high volume of screening cases limit the practically achievable detection rates, so automated prenatal screening to support clinicians is desirable. This paper presents and analyses potential deep learning (DL) techniques to diagnose CHDs in fetal US. Four convolutional neural network architectures were compared to select the best classifier, and the dense convolutional network (DenseNet) 201 architecture was selected to classify seven CHDs, namely ventricular septal defect, atrial septal defect, atrioventricular septal defect, Ebstein's anomaly, tetralogy of Fallot, transposition of the great arteries, and hypoplastic left heart syndrome, plus a normal control. The sensitivity, specificity, and accuracy of the DenseNet201 model were 100%, 100%, and 100%, respectively, for the intra-patient scenario and 99%, 97%, and 98%, respectively, for the inter-patient scenario. We used the intra-patient DL prediction model to validate our proposed model against the predictions of three expert fetal cardiologists. The proposed model produces satisfactory results, meaning it can support expert fetal cardiologists in interpreting decisions to improve CHD diagnostics. This work represents a step toward assisting front-line sonographers with CHD diagnoses at the population level.