The auscultation of heart sounds has proven advantageous for the early diagnosis of cardiovascular conditions. Various methods have been proposed for the automatic analysis of heart sounds to reduce subjectivity in diagnosis and alleviate physicians' workload. However, the effectiveness of these methods depends heavily on the amount and quality of the heart sound data used, and on the availability of publicly accessible datasets that include the most common and most difficult classes. In this study, we introduce HeartWave, a comprehensive heart sound dataset comprising recordings from nine distinct classes covering the most common heart sounds across the classes and subclasses of cardiovascular disease. The recordings are well documented, of good quality, and carefully labelled, with sufficient samples per class and a particular focus on cases that are hard to diagnose. The dataset includes a total of 1353 heart sound recordings and, notably, covers extremely rare and difficult-to-diagnose classes. To establish a reliable reference standard, a team of experienced cardiologists actively participated in the entire annotation process. The audio recordings are long enough that multiple heartbeats can be extracted from a single recording through segmentation techniques. Moreover, the dataset follows standard cardiology practice, capturing specific heart sounds at their corresponding clinical auscultation locations. According to our post-hoc analysis, the average signal-to-noise ratio of the proposed dataset is roughly twice that of the widely used PhysioNet/CinC 2016 public dataset, ensuring a cleaner acoustic signal. The proposed dataset provides a valuable resource for training and evaluating machine learning models aimed at automated heart sound classification and diagnosis.
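The abstract reports average signal-to-noise ratio as the basis for comparing dataset quality. As a minimal illustration of the underlying quantity (not the authors' actual measurement pipeline), SNR in decibels can be estimated from a signal segment and a noise-only segment; the function name and segment-based approach here are assumptions for the sketch:

```python
import numpy as np

def estimate_snr_db(signal_segment, noise_segment):
    """Estimate signal-to-noise ratio in decibels.

    Compares the mean power of a segment containing the signal of
    interest against the mean power of a noise-only segment.
    """
    p_signal = np.mean(np.square(signal_segment))
    p_noise = np.mean(np.square(noise_segment))
    return 10.0 * np.log10(p_signal / p_noise)
```

Under this definition, a dataset whose average SNR is twice another's (in dB) has substantially less noise energy relative to the heart sound itself.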
Cardiovascular disease is a significant cause of death worldwide, emphasizing the crucial need for timely detection and diagnosis of heart abnormalities. This study presents a new approach that utilizes deep learning models to diagnose cardiac issues by analyzing raw phonocardiogram (PCG) signals. The proposed method introduces a novel technique called the custom scalogram-based convolutional recurrent neural network (CS-CRNN). Diverging from conventional techniques, this model operates directly on the raw PCG signals, which are transformed into scalogram images within the initial layer of the CRNN architecture without introducing any learnable parameters. The results obtained from the CS-CRNN model are compared with traditional feature-based recurrent neural network (RNN) models. The comparison demonstrates comparable performance in both binary classification (normal and abnormal categories) and multiclass classification (five categories). The CS-CRNN model handles raw PCG data directly and employs data augmentation to improve performance on small datasets. It achieves an accuracy of 99.6% for binary classification, and 98.6% and 99.7% before and after optimization, respectively, for multiclass classification on the augmented dataset. These results show that the CS-CRNN model offers performance comparable to traditional methods, making it a promising tool for diagnosing cardiac abnormalities.
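The abstract describes transforming a raw PCG signal into a scalogram (a time-scale magnitude map from a continuous wavelet transform) as the model's first, parameter-free layer. A minimal NumPy sketch of a Morlet-wavelet scalogram follows; the function name, the choice of Morlet wavelet, and the convolution-based implementation are illustrative assumptions, not the paper's exact layer:

```python
import numpy as np

def morlet_scalogram(x, scales, w0=6.0):
    """Compute a scalogram: |CWT| of signal x at the given scales.

    Each row is the magnitude of the convolution of x with a Morlet
    wavelet dilated to one scale; columns correspond to time samples.
    No learnable parameters are involved, matching a fixed front-end.
    """
    n = len(x)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        # Morlet wavelet: complex sinusoid under a Gaussian envelope
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)  # scale normalization
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out
```

Stacking such a map over many scales yields an image-like representation that a convolutional recurrent network can consume directly.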
Diagnosis of ocular toxoplasmosis (OT) usually involves clinical examination and imaging, which can be expensive and require specialized personnel. The use of artificial intelligence (AI) to analyze fundus images for diagnosing ocular diseases is gaining traction; despite that, little work has focused on the detection of OT. To address this gap, we conducted a benchmark study that evaluates the effectiveness of existing pre-trained networks, using transfer learning techniques, to detect and segment OT lesions in fundus images. The goal of this study is to provide insights for future researchers interested in harnessing deep learning (DL) techniques for automated, easy-to-use, and precise diagnosis of OT from retinal fundus images. We also performed an in-depth analysis of different feature extraction techniques to find the most effective one for the classification and segmentation of lesions. For the classification task, we evaluated pre-trained VGG16, MobileNetV2, InceptionV3, ResNet50, and DenseNet121 models. Among them, MobileNetV2 outperformed all other models in terms of accuracy (Acc.), recall, and F1-score, exceeding the second-best model, InceptionV3, by 0.7% in accuracy. However, DenseNet121 achieved the best precision, 0.1% higher than MobileNetV2. For the segmentation task, we replaced the encoder block of the U-Net with pre-trained MobileNetV2, InceptionV3, ResNet34, and VGG16 encoders and trained with two different loss functions (Dice loss and Jaccard loss). The MobileNetV2/U-Net outperformed ResNet34 by 0.5% in accuracy and 2.1% in Dice score when the Jaccard loss, which proved the more effective of the two, was employed during training. The results of this study verify the effectiveness of DL techniques in the diagnosis of OT.
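The abstract compares training the segmentation models with Dice loss and Jaccard loss, and reports results using the Dice score. As a minimal sketch of the soft Dice loss commonly used for this purpose (the function signature and smoothing term are assumptions, not the paper's exact formulation):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|).

    pred and target are arrays of per-pixel values in [0, 1];
    eps avoids division by zero on empty masks.
    """
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    intersection = np.sum(pred * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
```

The Jaccard (IoU) loss mentioned in the abstract is closely related, replacing the Dice overlap ratio with |P∩T| / |P∪T|; the reported Dice score is simply one minus the Dice loss on binarized masks.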