2020
DOI: 10.1609/aaai.v34i09.7062

Multiple Data Augmentation Strategies for Improving Performance on Automatic Short Answer Scoring

Abstract: Automatic short answer scoring (ASAS) is a research topic in intelligent education and an active area of natural language understanding. Many experiments have confirmed that current ASAS systems are not yet good enough, because their performance is limited by the training data. Focusing on this problem, we propose MDA-ASAS, multiple data augmentation strategies for improving performance on automatic short answer scoring. MDA-ASAS is designed to learn language representation enhanced by data augmentation strategies, wh…
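
The truncated abstract states that MDA-ASAS enhances the learned language representation with multiple data augmentation strategies, but the strategies themselves are cut off here. As a generic, hedged illustration of what a token-level text-augmentation step can look like (not the paper's actual method; the function name and parameters are hypothetical):

```python
import random

def augment_answer(answer, p_delete=0.1, n_swaps=1, seed=None):
    """Return a lightly perturbed copy of a short answer.

    Illustrative only: random token deletion plus random adjacent-token
    swaps, two common lightweight text-augmentation operations.
    """
    rng = random.Random(seed)
    tokens = answer.split()
    # Randomly drop tokens, but never empty the answer entirely.
    kept = [t for t in tokens if rng.random() > p_delete] or tokens[:1]
    # Swap a few adjacent token pairs to vary word order slightly.
    for _ in range(n_swaps):
        if len(kept) > 1:
            i = rng.randrange(len(kept) - 1)
            kept[i], kept[i + 1] = kept[i + 1], kept[i]
    return " ".join(kept)

# Example: generate extra training variants of one graded answer.
original = "the mitochondria produce energy for the cell"
variants = [augment_answer(original, seed=s) for s in range(3)]
```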

Cited by 45 publications (21 citation statements); references 9 publications.
“…This tendency is consistent with previous studies. Comparing the conventional DNN-AES models shows that the LSTM-based model with MoT pooling has higher performance than models with last pooling, which is also consistent with previous studies (Alikaniotis et al., 2016; Riordan et al., 2017). BERT tends to outperform the LSTM-based models, as in other BERT applications including automated short-answer grading (Devlin et al., 2019; Lun et al., 2020; Sung et al., 2019). As Dasgupta et al. (2018) reported, the conventional hybrid model shows the highest average accuracy among the conventional models.…”
Section: Results (supporting)
confidence: 90%
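
The quoted comparison turns on how the LSTM's per-token hidden states are pooled into a single answer representation before scoring. A minimal PyTorch sketch, assuming illustrative layer sizes and a simple linear scoring head (not the cited architectures), contrasts the two pooling choices mentioned, mean-over-time (MoT) versus last timestep:

```python
import torch
import torch.nn as nn

embed_dim, hidden_dim, vocab_size = 50, 128, 10_000

embedding = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
scorer = nn.Linear(hidden_dim, 1)  # regression head mapping a representation to a score

token_ids = torch.randint(0, vocab_size, (4, 30))  # batch of 4 answers, 30 tokens each
outputs, _ = lstm(embedding(token_ids))             # shape: (4, 30, hidden_dim)

mot_repr = outputs.mean(dim=1)   # MoT pooling: average hidden states over all timesteps
last_repr = outputs[:, -1, :]    # last pooling: keep only the final timestep's hidden state

mot_score = scorer(mot_repr)     # the quote's comparison is between scores
last_score = scorer(last_repr)   # produced from these two representations
```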
“…Jiaqi Lun et al. (2020) proposed automatic short answer scoring with BERT. In this approach, student responses are compared with a reference answer and assigned scores.…”
Section: Neural Network Models (mentioning)
confidence: 99%
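
The statement describes the general setup of BERT-based short answer scoring, in which the reference answer and the student response are encoded together and mapped to a score. A minimal sketch of that pair-encoding pattern using the Hugging Face transformers library follows; the model name, the single-output regression head, and the example texts are illustrative assumptions, not Lun et al.'s released configuration, and the model would still need fine-tuning on graded answer pairs:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1  # single output, used as a regression-style score
)

reference = "Photosynthesis converts light energy into chemical energy."
response = "Plants turn sunlight into chemical energy they can store."

# BERT sees the pair as one sequence: [CLS] reference [SEP] response [SEP]
inputs = tokenizer(reference, response, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze()  # meaningful only after fine-tuning on graded pairs
```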
“…Additionally, some approaches consider the question (Lv et al., 2021), student models (Zhang et al., 2020b), or results from True/False questions posed in the same assessment (Uto and Uchida, 2020). Transformer-based approaches are also noteworthy here (Sung et al., 2019; Ghavidel et al., 2020; Lun et al., 2020; Camus and Filighera, 2020). They achieve high performance on the SemEval short answer grading benchmark dataset (Dzikovska et al., 2013).…”
Section: Automatic Short Answer Grading (mentioning)
confidence: 99%