2022
DOI: 10.1101/2022.03.01.22271693
Preprint
The Acoustic Dissection of Cough: Diving into Machine Listening-based COVID-19 Analysis and Detection

Abstract: Purpose: The coronavirus disease 2019 (COVID-19) has caused a worldwide crisis. Great efforts have been made to prevent and control its transmission, from early screening to vaccination and treatment. With the recent emergence of many automatic disease-recognition applications based on machine listening techniques, COVID-19 could be detected quickly and cheaply from recordings of cough, a key symptom of the disease. To date, knowledge on the acoustic characteristics of COVID-19 cough sounds is …


Cited by 4 publications (5 citation statements)
References 75 publications
“…A common approach is to derive a selection of features that leads to the best detection performance ( 37 ), or to identify those features that contribute most to the final model output. Knowledge of the specific speech features that are most essential for the ML algorithm to differentiate between patients with a certain disease and healthy speakers could allow the physician to draw conclusions about potential voice-physiological atypicalities associated with the investigated disease ( 38 ). Alternatively, sonification represents a recently emerging XAI approach, in which sound is generated to auditorily demonstrate salient facets of learning data or relevant acoustic features, allowing human listeners to follow the reasoning of an ML algorithm ( 39 ).…”
Section: Discussion
confidence: 99%
“…Rahman et al [21] used CAMBRIDGE and QATARI datasets and obtained a 96.5% accuracy score. Ren et al [22] used the COUGHVID dataset and obtained a 0.632 Unweighted Average Recall (UAR) score. Andreu-Perez et al [25] used their own dataset with CNN and obtained 97.18% and 96.64% average accuracy scores for COVID-19 and healthy classes.…”
Section: Discussion
confidence: 99%
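The Unweighted Average Recall (UAR) reported above is simply the mean of the per-class recalls, which makes it a common choice for class-imbalanced tasks such as COVID-19 cough detection. A minimal, self-contained sketch (the labels below are purely illustrative, not data from the cited studies):

```python
def uar(y_true, y_pred):
    """Unweighted Average Recall: mean of per-class recalls."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        # Indices of all samples whose true label is class c.
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# Imbalanced toy example: 4 negatives, 2 positives.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 0]
print(uar(y_true, y_pred))  # (3/4 + 1/2) / 2 = 0.625
```

Unlike plain accuracy (here 4/6 ≈ 0.667), UAR weights both classes equally, so a model that ignores the minority class cannot score well.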
“…The combinations of the Cambridge and the Qatari datasets were used, and a 96.5% accuracy score was obtained using a 5-fold cross-validation test. Ren et al [22] used a set of 6,373 acoustic features based on the ComParE feature set to detect COVID-19 through cough sound analysis. The authors applied various machine learning methods to classify the acoustic features and found that MFCCs and bare-essential acoustic information were quite efficient for COVID-19 detection.…”
Section: Introduction
confidence: 99%
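The MFCC features highlighted in this statement can be sketched end to end with NumPy alone: frame the signal, take the power spectrum, pool it through a mel filterbank, and decorrelate the log energies with a DCT. All parameter values below (frame size, hop, filter count) are illustrative defaults, not the settings used by the cited studies:

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    # Frame the signal and apply a Hann window to each frame.
    frames = [signal[s:s + n_fft] * np.hanning(n_fft)
              for s in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # power spectrum

    # Triangular mel filterbank spanning 0 .. sr/2.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):
            fbank[m - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fbank[m - 1, k] = (hi - k) / max(hi - c, 1)

    log_mel = np.log(power @ fbank.T + 1e-10)  # log mel energies

    # DCT-II basis to decorrelate; keep the first n_mfcc coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi / n_mels * (n[:, None] + 0.5)
                 * np.arange(n_mfcc)[None, :])
    return log_mel @ dct  # shape: (n_frames, n_mfcc)

# Usage on a synthetic 1-second tone (real use: a cough recording).
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # (61, 13): 61 frames, 13 coefficients each
```

Large feature sets such as ComParE extend this idea by applying many statistical functionals (means, percentiles, regression slopes) over frame-level descriptors like these, which is how they reach thousands of features per recording.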
“…In addition, artificial intelligence systems can be successfully applied in a wide range of fields from lip-reading applications [8] to document language [9] and gesture recognition [10], epileptic seizures [11] and heart disease detection [12]. Similarly, artificial intelligence systems based on cough sounds [13,14] and especially chest images (X-Ray and CT scan) are widely used in COVID-19 diagnosis [15,16].…”
Section: Introduction
confidence: 99%