2020
DOI: 10.1007/s41233-020-00042-1

Towards speech quality assessment using a crowdsourcing approach: evaluation of standardized methods

Abstract: Subjective speech quality assessment has traditionally been carried out in laboratory environments under controlled conditions. With the advent of crowdsourcing platforms, tasks that need human intelligence can be resolved by crowd workers over the Internet. Crowdsourcing also offers a new paradigm for speech quality assessment, promising higher ecological validity of the quality judgments at the expense of potentially lower reliability. This paper compares laboratory-based and crowdsourcing-based speech qua…

Cited by 18 publications (16 citation statements)
References: 29 publications
“…We kept the structure of the P.808 Toolkit the same; further details about the P.808 Toolkit can be found in [3] and details on validation of the ITU-T Rec. P.808 in [5].…”
Section: Methods (mentioning)
confidence: 99%
“…Further details about the P.808 Toolkit can be found in [11] and details on validation of the ITU-T Rec. P.808 in [16].…”
Section: Crowdsourcing Test (mentioning)
confidence: 99%
“…Further details about the P.808 Toolkit can be found in [15] and details on validation of the ITU-T Rec. P.808 in [20]. The test participant (marked by "You") is positioned in the center of the communication.…”
Section: Crowdsourcing Test (mentioning)
confidence: 99%
“…Annotating large datasets is a general problem in machine learning, especially in multimedia quality assessment, where multiple raters are needed to annotate a single stimulus. Although annotations obtained through crowdsourcing can be as valid as lab-based quality measures [4], [5], annotating data is still a costly and time-consuming operation.…”
Section: Introduction (mentioning)
confidence: 99%