2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP) 2019
DOI: 10.1109/mlsp.2019.8918725
Automatic Screening Of Children With Speech Sound Disorders Using Paralinguistic Features

Cited by 5 publications (7 citation statements)
References 15 publications
“…Long-term average spectra for each recording were inspected for possible noise artefacts and further cleaned if any were found (Olsen, 2018). Following Shahin et al (2019), we extracted the standardized voice feature set eGeMAPS with the open-source software OpenSmile (Eyben, 2015, p. 201; Eyben et al, 2010, 2016). The feature set contains 87 features, which are described in Appendix A2, Table S2.…”
Section: Methods
confidence: 99%
“…We focused on relatively simple models, as they tend to be less prone to overfitting; therefore, if we were to find generalizability concerns (i.e., inability to accurately identify autistic participants in new samples), such concerns would be even more relevant for models like neural networks, which are more complex and more likely to overfit. For our methodological model we were mainly inspired by Shahin et al (2019), a study based on SVM and rigorous cross-validation (CV), reaching high performance in predicting autism from voice. The authors reported an accuracy of 0.88, that is, 88% of the samples were correctly classified, and an F1 score of 0.90.…”
Section: Methods
confidence: 99%
“…This study sets the methodological choices from Shahin et al (2019) in a highly conservative and fully reported pipeline, which relies on cross-validated training procedures and held-out testing sets, that is, it ensures the model is trained (fitted) on one subset of the data (the training dataset), while its performance is assessed on a different subset of the data (the held-out dataset). Figure 1 provides an overview of the pipeline, which is discussed in detail below.…”
Section: Pipeline
confidence: 99%
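The pipeline quoted above, an SVM with cross-validated training and a held-out test set, can be sketched as follows. This is an illustrative sketch, not the authors' code: the data here are synthetic stand-ins for the 87 eGeMAPS features, and all names and parameters are assumptions.

```python
# Sketch: SVM with cross-validation on the training subset and a single
# final evaluation on a held-out test set (synthetic data, illustrative).
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 200, 87                       # 87 features, as in the eGeMAPS set
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.8                     # shift one class so it is learnable

# Held-out split: the model is never fitted on the test subset.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

# Cross-validated performance estimate, computed on training data only.
cv_acc = cross_val_score(model, X_tr, y_tr, cv=5).mean()

# Final fit on all training data, assessed once on the held-out set.
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
acc = accuracy_score(y_te, pred)
f1 = f1_score(y_te, pred)
print(round(cv_acc, 2), round(acc, 2), round(f1, 2))
```

Keeping the held-out set untouched until the final evaluation is what distinguishes this setup from plain cross-validation alone, and is the conservative choice the citing study emphasizes.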
“…Long-term average spectra for each recording were inspected for possible noise artifacts and further cleaned if any were found (Olsen, 2018). Following Shahin et al (2019), we extracted the standardized voice feature set eGeMAPS with the open-source software OpenSmile (Eyben, 2015, p. 201; Eyben et al, 2010, 2016). The feature set contains 87 features, which are described in Appendix A2, Table S2.…”
Section: Preprocessing
confidence: 99%