2021
DOI: 10.1016/j.neunet.2021.02.008
Residual Neural Network precisely quantifies dysarthria severity-level based on short-duration speech segments

Cited by 44 publications (22 citation statements); references 61 publications.
“…As such, we conducted two comparative experiments based on both three-class and four-class tasks. We obtained 97.83% accuracy for the three-class and 97.19% for the four-class task compared to the maximum 98.90% accuracy reported in [19].…”
Section: Comparison With the State Of The Art
confidence: 59%
“…Nevertheless, because of the held-out strategy we adopted, a direct comparison with those studies that did not consider unseen speakers is not informative. To verify how our identified optimal setup compares with those reported in the literature, we have conducted a final set of experiments adopting a similar evaluation strategy considered in [19]. This would provide more confidence that the presence of unseen speakers is responsible for the variations in performance, not the optimal setup.…”
Section: Comparison With the State Of The Art
confidence: 96%
“…Subsequently, several automated systems based on machine or deep learning have been developed to provide efficient tools for detecting dysarthria. These include automated measurement of acoustic analysis values in specific dysarthrias [12], detection of disease from voice recordings [13], and assessment of severity level [14]. These methods depended on extracting acoustic features from speech utterances, such as pitch, harmonics, shimmer, and jitter, followed by classification using traditional machine learning methods such as the Gaussian mixture model (GMM), hidden Markov model (HMM), and support vector machine (SVM).…”
Section: Introduction
confidence: 99%
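The feature-then-classifier pipeline described in that citation statement can be sketched in a few lines. The snippet below is a minimal illustration, not any cited system's implementation: the `jitter` and `shimmer` helpers are hypothetical simplified definitions (mean cycle-to-cycle variation of pitch periods and amplitudes, normalized by their means), the data is synthetic, and an SVM stands in for the classical classifiers mentioned (GMM/HMM/SVM).

```python
import numpy as np
from sklearn.svm import SVC

def jitter(periods):
    """Simplified jitter: mean absolute difference between consecutive
    pitch periods, normalized by the mean period (hypothetical definition)."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def shimmer(amplitudes):
    """Simplified shimmer: the analogous measure computed on the
    peak amplitudes of successive glottal cycles."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Synthetic stand-in for measured cycles: low cycle-to-cycle variation
# for "typical" voices, higher variation for "dysarthric-like" voices.
rng = np.random.default_rng(0)

def make_features(variation, n=40):
    feats = []
    for _ in range(n):
        periods = 0.008 + rng.normal(0.0, variation * 0.008, 50)  # ~125 Hz
        amps = 1.0 + rng.normal(0.0, variation, 50)
        feats.append([jitter(periods), shimmer(amps)])
    return np.array(feats)

X = np.vstack([make_features(0.01), make_features(0.10)])
y = np.array([0] * 40 + [1] * 40)  # 0 = low variation, 1 = high variation

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

In practice, real systems would compute these measures from recorded speech (e.g. via pitch-tracking software) and evaluate on held-out speakers, as the citing study above emphasizes.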