2021
DOI: 10.1007/978-3-030-87802-3_18
An Ensemble Approach for the Diagnosis of COVID-19 from Speech and Cough Sounds

Cited by 2 publications (1 citation statement)
References 19 publications
“…The previous literature has shown that classification of COVID-19 using acoustic signatures is indeed possible: Laguarta et al. [9] achieved 93.8% accuracy on forced-cough recordings with parallel ResNet50 deep learning architectures; Imran et al. [10] used three parallel classifier systems with a mediator to achieve a final accuracy of 92.64% (though the app predicted an inconclusive test result 38.7% of the time, which was not accounted for in the accuracy); Pahar et al. [11] applied transfer learning to a pre-trained ResNet50 architecture to achieve accuracies above 92% for cough, speech, and breathing sounds; and Pinkas et al. [12] used a three-stage deep learning architecture to correctly identify 71% of positive patients. The release of public datasets, such as Coswara/DiCOVA Challenge [13, 14], University of Cambridge/NeurIPS 2021 [15], and COUGHVID [16], has dramatically accelerated the development and release of new classification approaches, with reported areas under the receiver operating characteristic curve (AUC-ROC) ranging from 0.60 to 0.95 [17, 18, 19, 20]. Previously, the authors have also presented early work on the Coswara dataset [21] that was the top performer in the breathing and cough tracks of the Second DiCOVA Challenge, achieving AUC-ROCs of 0.87 and 0.82, respectively [22].…”
Section: Introduction (citation type: mentioning)
confidence: 99%
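The citing passage above revolves around two technical ingredients: transfer learning on a pre-trained ResNet50 and AUC-ROC as the reported evaluation metric. As a purely illustrative aid, and not the code of the cited paper or of any of the referenced works, a minimal PyTorch sketch of that general recipe might look as follows; the spectrogram input shape (224x224, replicated to 3 channels), the frozen backbone, the hyperparameters, and the dummy data are all assumptions made for the sketch.

# Minimal sketch (assumptions noted above): binary COVID-19 vs. non-COVID
# classification of cough-sound mel-spectrograms via transfer learning on an
# ImageNet-pre-trained ResNet50, evaluated with AUC-ROC.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.metrics import roc_auc_score

# Load pre-trained weights, freeze the backbone, and attach a new 2-class head.
model = resnet50(weights="IMAGENET1K_V2")
for p in model.parameters():
    p.requires_grad = False                      # freeze convolutional backbone
model.fc = nn.Linear(model.fc.in_features, 2)    # trainable classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for mel-spectrograms resized to 224x224 and tiled
# across 3 channels (an assumption for this sketch), with fixed binary labels.
x = torch.randn(8, 3, 224, 224)
y = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])

# One training step on the new head.
model.train()
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

# AUC-ROC on (here, the same dummy) data, the metric quoted in the passage.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(x), dim=1)[:, 1]
print("AUC-ROC:", roc_auc_score(y.numpy(), probs.numpy()))

Freezing the backbone and training only the head is the lightest-weight transfer-learning variant; the works cited above may fine-tune deeper layers or use different front-end features, which this sketch does not attempt to reproduce.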