2019
DOI: 10.1007/978-3-030-17798-0_40
Automatic Recognition System for Dysarthric Speech Based on MFCC’s, PNCC’s, JITTER and SHIMMER Coefficients

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1

Citation Types

0
1
0

Year Published

2020
2020
2023
2023

Publication Types

Select...
2
1

Relationship

0
3

Authors

Journals

Cited by 3 publications (1 citation statement)
References 8 publications
“…In [25], the authors showed that voice quality features are relevant markers of paralinguistic information and should even be considered prosodic parameters alongside pitch and duration, for instance. It has been demonstrated that prosodic information can increase the performance of automatic speech recognition systems, as in [26], where the authors built an ASR system for dysarthric speech, or [27], where the authors applied jitter and shimmer to noisy speech recognition, both using HMM models. A neural-network approach with LSTMs was taken in [28] for an acoustic emotion recognition task; however, the authors did not perform the ASR task itself.…”
Section: Voice Features (mentioning)
confidence: 99%
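As an illustration of the kind of feature set discussed in the citation statement (spectral MFCCs combined with the voice-quality measures jitter and shimmer), the following is a minimal sketch, not the pipeline of the cited paper. It assumes the third-party libraries librosa and praat-parselmouth, a placeholder file name "utterance.wav", and commonly used default parameter values; PNCC extraction is omitted because no widely standard library implementation is assumed here.

# Hedged sketch: extract 13 MFCCs plus local jitter and shimmer for one utterance.
import numpy as np
import librosa
import parselmouth
from parselmouth.praat import call

wav_path = "utterance.wav"  # hypothetical input file

# Spectral features: 13 MFCCs, averaged over time so they can be paired with
# utterance-level voice-quality measures (one common, simple choice).
y, sr = librosa.load(wav_path, sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
mfcc_mean = mfcc.mean(axis=1)

# Voice-quality features: local jitter and shimmer via Praat, computed from a
# point process of glottal pulses. The 75-500 Hz pitch floor/ceiling and the
# remaining arguments are typical defaults, assumed here rather than taken
# from the paper.
snd = parselmouth.Sound(wav_path)
point_process = call(snd, "To PointProcess (periodic, cc)", 75, 500)
jitter_local = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer_local = call([snd, point_process], "Get shimmer (local)",
                     0, 0, 0.0001, 0.02, 1.3, 1.6)

# Concatenate into a single feature vector, e.g. as input to an HMM- or
# DNN-based recognizer.
features = np.concatenate([mfcc_mean, [jitter_local, shimmer_local]])
print(features.shape)  # (15,)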