Interspeech 2021
DOI: 10.21437/interspeech.2021-19

The INTERSPEECH 2021 Computational Paralinguistics Challenge: COVID-19 Cough, COVID-19 Speech, Escalation & Primates

Abstract: The INTERSPEECH 2021 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: In the COVID-19 Cough and COVID-19 Speech Sub-Challenges, a binary classification on COVID-19 infection has to be made based on coughing sounds and speech; in the Escalation Sub-Challenge, a three-way assessment of the level of escalation in a dialogue is featured; and in the Primates Sub-Challenge, four species vs. background need to be classified. […]
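All four sub-challenges are framed as standard audio classification tasks, and ComParE results are reported as unweighted average recall (UAR), the metric also quoted in the citation statements below (e.g., the 64.7% test UAR baseline for CCS). The following is a minimal sketch of such a pipeline, assuming precomputed per-recording feature vectors (for example, openSMILE ComParE functionals) stored in NumPy files; the file names and the linear SVM are illustrative assumptions, not the official baseline code.

```python
# Minimal sketch: linear SVM on precomputed acoustic functionals,
# evaluated with unweighted average recall (UAR), the ComParE metric.
# Feature files and classifier choice are assumptions for illustration only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import recall_score

# Hypothetical precomputed features (e.g., openSMILE ComParE functionals).
X_train = np.load("train_features.npy")   # shape: (n_train, n_features)
y_train = np.load("train_labels.npy")     # 0 = negative, 1 = positive
X_devel = np.load("devel_features.npy")
y_devel = np.load("devel_labels.npy")

clf = make_pipeline(StandardScaler(), LinearSVC(C=1e-3, max_iter=10000))
clf.fit(X_train, y_train)

# UAR = mean of per-class recalls, robust to class imbalance.
uar = recall_score(y_devel, clf.predict(X_devel), average="macro")
print(f"Devel UAR: {uar:.3f}")
```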

Citations: cited by 75 publications (81 citation statements)
References: 10 publications
“…In the INTERSPEECH 2021 ComParE (19), the CCS provides a dataset from the crowd-sourced Cambridge COVID-19 Sound database (15). Participants are asked to provide one to three forced coughs per recording via one of several platforms: a web interface, an Android app, or an iOS app.…”
Section: Results (mentioning; confidence: 99%)
“…With the selected pre-trained COUGHVID models and their strategies (number of layers/blocks and transfer-learning strategy), we transfer the parameters or embeddings of these models to the models trained from scratch on the ComParE and DiCOVA datasets during training. Finally, we select the best results to compare against the official baselines: an average validation AUC of 68.81% (20) for the DiCOVA Track-1 dataset, and a test UAR without fusion of 64.7% (19) for ComParE CCS. Similarly, when training models from scratch or incorporating embeddings, we set the initial learning rate to 0.001, whereas when transferring parameters, the initial learning rate is set to 0.0001.…”
Section: Results (mentioning; confidence: 99%)
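As a rough illustration of the two strategies contrasted in the quoted passage, the sketch below switches between (a) initialising a classifier from pre-trained COUGHVID weights and fine-tuning with the smaller initial learning rate (1e-4) and (b) training from scratch or on incorporated embeddings with the larger one (1e-3). The CoughNet architecture, checkpoint path, and optimiser choice are hypothetical placeholders, not the cited authors' code.

```python
# Sketch of the two transfer strategies from the quoted passage.
# Model class, checkpoint file, and optimiser are placeholders/assumptions.
import torch
import torch.nn as nn

class CoughNet(nn.Module):                      # hypothetical CNN classifier
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = CoughNet()

TRANSFER_PARAMETERS = True                      # strategy switch
if TRANSFER_PARAMETERS:
    # (a) Transfer parameters from a pre-trained COUGHVID model
    #     and fine-tune with a smaller initial learning rate.
    state = torch.load("coughvid_pretrained.pt", map_location="cpu")
    model.load_state_dict(state, strict=False)  # reuse matching layers/blocks
    lr = 1e-4
else:
    # (b) Train from scratch (or on incorporated embeddings) with lr = 1e-3.
    lr = 1e-3

optimizer = torch.optim.Adam(model.parameters(), lr=lr)
criterion = nn.CrossEntropyLoss()
```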
“…The research in this contribution is a continuation of experiments that were included in the "Escalation" Sub-Challenge of the INTERSPEECH 2021 ComParE Challenge (Schuller et al., 2021). For the experiments that were part of the challenge, two of the three datasets used in this study were applied in a similar cross-corpus manner, with one used for training and the other for testing.…”
Section: Related Work (mentioning; confidence: 99%)
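A schematic of the cross-corpus protocol mentioned above: each corpus in turn serves as training data while another serves as test data. The corpus names and the synthetic loader are placeholders standing in for the actual escalation datasets and their three-level labels.

```python
# Cross-corpus sketch: train on one corpus, evaluate on another.
# load_corpus() and the corpus names are hypothetical placeholders.
from itertools import permutations
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def load_corpus(name):
    """Placeholder loader: synthetic features and three-level escalation labels."""
    X = rng.normal(size=(100, 40))
    y = rng.integers(0, 3, size=100)
    return X, y

corpora = ["corpus_A", "corpus_B", "corpus_C"]   # placeholder names
for train_name, test_name in permutations(corpora, 2):
    X_tr, y_tr = load_corpus(train_name)
    X_te, y_te = load_corpus(test_name)
    clf = LinearSVC(max_iter=10000).fit(X_tr, y_tr)
    uar = recall_score(y_te, clf.predict(X_te), average="macro")
    print(f"train={train_name} test={test_name} UAR={uar:.3f}")
```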