7th ISCA Workshop on Speech and Language Technology in Education (SLaTE 2017)
DOI: 10.21437/slate.2017-18

Deep-Learning Based Automatic Spontaneous Speech Assessment in a Data-Driven Approach for the 2017 SLaTE CALL Shared Challenge

Abstract: This paper presents a deep-learning based assessment method for a spoken computer-assisted language learning (CALL) system for non-native child speakers, which follows a data-driven rather than a rule-based approach. In particular, we focus on the spoken CALL assessment task of the 2017 SLaTE challenge. To this end, the proposed method consists of four main steps: speech recognition, meaning feature extraction, grammar feature extraction, and deep-learning based assessment. At first, speech recognition is…
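The abstract outlines a four-step pipeline (speech recognition, meaning feature extraction, grammar feature extraction, deep-learning based assessment). Below is a minimal sketch, not the authors' code, of how those stages could be chained; every function, feature, and the dummy ASR hypothesis are hypothetical placeholders chosen only to keep the example runnable.

import numpy as np

def recognize(audio) -> str:
    # Step 1: an ASR front end would go here; we return a fixed dummy hypothesis.
    return "i would like two tickets please"

def meaning_features(prompt: str, hyp: str) -> np.ndarray:
    # Step 2: toy meaning features, e.g. word overlap between prompt and response.
    p, h = set(prompt.split()), set(hyp.split())
    overlap = len(p & h) / max(len(p), 1)
    return np.array([overlap, float(len(h))])

def grammar_features(hyp: str) -> np.ndarray:
    # Step 3: toy grammar features, e.g. response length and type/token ratio.
    words = hyp.split()
    ttr = len(set(words)) / max(len(words), 1)
    return np.array([float(len(words)), ttr])

def feature_vector(prompt: str, audio) -> np.ndarray:
    # Step 4 would feed the concatenated meaning and grammar features to a trained
    # feed-forward network for the accept/reject decision; here we only build its input.
    hyp = recognize(audio)
    return np.concatenate([meaning_features(prompt, hyp), grammar_features(hyp)])

print(feature_vector("ask for two tickets", audio=None))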

Cited by 8 publications (10 citation statements). References 8 publications.
“…For each test sentence formed by $N_W$ words, of which $N_{OOV}$ are out-of-vocabulary (OOV), we compute the following 5 features using each LM (taking inspiration for features from the works of [29,30,12,9,10]): a) $\log(P)/N_W$, that is, the average log-probability of the sentence, b) $\log(P_{OOV})/N_{OOV}$, that is, the average contribution of OOV words to the log-probability of the sentence, c) $(\log(P)-\log(P_{OOV}))/N_W$, that is, the average log-difference between the two above probabilities, d) $N_W - N_{bo}$, where $N_{bo}$ is the number of back-offs applied by the LM to the input sentence (this difference is related to the frequency of n-grams in the sentence that have also been observed in the training set), e) $N_{OOV}$, the number of OOVs in the sentence. Note that if the word counts $N_W$ or $N_{OOV}$ are equal to zero (i.e.…”
Section: Classification Features
confidence: 99%
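The five language-model features quoted above reduce to a few sums over per-word log-probabilities. Below is a minimal sketch, assuming we already have per-word log-probabilities, OOV flags, and the LM's back-off count for one sentence; the function name, interface, and the zero-count fallbacks are assumptions, not tied to any LM toolkit or to the cited implementation.

def lm_features(word_logprobs, is_oov, n_backoffs):
    # word_logprobs: per-word log-probabilities under one LM
    # is_oov: per-word booleans marking out-of-vocabulary words
    # n_backoffs: number of back-offs the LM applied to the sentence (N_bo)
    n_w = len(word_logprobs)
    n_oov = sum(is_oov)
    log_p = sum(word_logprobs)
    log_p_oov = sum(lp for lp, oov in zip(word_logprobs, is_oov) if oov)
    return {
        "avg_logprob": log_p / n_w if n_w else 0.0,                        # a) log(P)/N_W
        "avg_oov_logprob": log_p_oov / n_oov if n_oov else 0.0,            # b) log(P_OOV)/N_OOV
        "avg_logprob_diff": (log_p - log_p_oov) / n_w if n_w else 0.0,     # c) (log(P)-log(P_OOV))/N_W
        "observed_ngrams": n_w - n_backoffs,                               # d) N_W - N_bo
        "n_oov": n_oov,                                                    # e) N_OOV
    }

# Example: a 4-word sentence with one OOV word and two back-offs.
print(lm_features([-2.1, -8.5, -1.7, -3.0], [False, True, False, False], 2))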
“…This new data was selected in a similar way to the first training set, to be balanced and representative of the collected data, with the additional constraint that there should be no overlap of individual students between the first task and second task. Speech data were processed through the two best speech recognisers from the first shared task [6,8] after which the two sets of output transcriptions were merged and cleaned up by transcribers at the University of Geneva.…”
Section: Data
confidence: 99%
“…The cleaned, merged transcriptions were processed through four of the best assessment systems from the first shared task [6,8,7,9] to give accept/reject decisions for the language criterion. The training data could then be divided into three groups according to the agreement among the four systems.…”
Section: Data
confidence: 99%
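The quoted data-selection step labels training items with accept/reject decisions from four systems and then splits them by agreement. A minimal sketch of such a split follows; since the exact three groups used in the shared task are not stated in the excerpt, the unanimous / 3-1 majority / 2-2 split chosen here is only an illustrative assumption.

from collections import defaultdict

def group_by_agreement(decisions_per_item):
    # decisions_per_item: {item_id: [bool, bool, bool, bool]} with True = accept.
    groups = defaultdict(list)
    for item_id, votes in decisions_per_item.items():
        accepts = sum(votes)
        if accepts in (0, 4):
            groups["unanimous"].append(item_id)      # all four systems agree
        elif accepts in (1, 3):
            groups["majority"].append(item_id)       # 3-1 split
        else:
            groups["split"].append(item_id)          # 2-2 split
    return dict(groups)

print(group_by_agreement({
    "u1": [True, True, True, True],
    "u2": [True, False, True, True],
    "u3": [True, False, True, False],
}))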
“…Several types of machine learning models have been used in the first edition of this challenge, for example, Support Vector Machine (SVM), K-Nearest Neighbor models, and Feed-Forward Neural networks [9,15,16].…”
Section: Machine Learning Models
confidence: 99%
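As a rough illustration of the three model families named above (SVM, k-nearest neighbours, feed-forward neural network), the sketch below fits each of them on random stand-in feature vectors using scikit-learn; the data, feature dimensionality, and hyperparameters are arbitrary assumptions and do not reproduce any challenge entry.

import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))        # 200 responses x 10 features (stand-in data)
y = rng.integers(0, 2, size=200)      # accept (1) / reject (0) labels (stand-in data)

for name, clf in [
    ("SVM", SVC(kernel="rbf")),
    ("k-NN", KNeighborsClassifier(n_neighbors=5)),
    ("Feed-forward NN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)),
]:
    clf.fit(X, y)
    print(name, "training accuracy:", clf.score(X, y))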