2019
DOI: 10.1007/s40860-019-00085-y
On the impact of dysarthric speech on contemporary ASR cloud platforms

Cited by 40 publications (22 citation statements)
References 18 publications
“…One of the obvious barriers that some users with disabilities can encounter by interacting with voice assistants is related to speech impairments [ 13 ] that are a frequent secondary consequence of motor disorders [ 14 ]. Although most voice assistants exploit machine learning algorithms to adapt to the user and increase their speech recognition accuracy over time [ 15 , 16 ], these systems are still designed and developed for people with clear and intelligible speech.…”
Section: Introduction
confidence: 99%
“…Notably, this apparent data bias is not limited to commercial ASR systems, as Mozilla's open-source DeepSpeech system trained on their crowdsourced CommonVoice corpus also performs significantly worse for AAE and Indian English than Mainstream US English (Martin and Tang, 2020;Meyer et al, 2020). Other work has focused on the use of ASR as an assistive technology and found that most major systems perform poorly for Deaf and hard of hearing (Glasser, 2019), and dysarthric users (De Russis and Corno, 2019;Young and Mihailidis, 2010).…”
Section: Predictive Bias in ASR
confidence: 99%