2020
DOI: 10.1109/tlt.2019.2897997
Feature Engineering and Ensemble-Based Approach for Improving Automatic Short-Answer Grading Performance

Cited by 33 publications (19 citation statements)
References 28 publications
“…Among their conclusions is that automatic feedback is being used more in structured questions that require well-defined answers. These questions include multiple-choice [18], fill-in-the-blank [19], or those with a solution presented in a structured language, i.e., a mathematical formula [20] or a program [21,22]. The main positive effects of automatic feedback include the students using the feedback for improvement [23], increased student engagement [24,25], and reduction of instructor bias [26].…”
Section: Related Work
Confidence: 99%
“…In general, there are two approaches used to automatically evaluate short answers. The first approach uses a supervised method (Roy et al., 2016; Sultan et al., 2016; Sahu and Bhowmick, 2020) that extracts features from the short answers. The second approach uses a variety of unsupervised methods to determine scores based on the distance between the learner responses and the answer model (Bin et al., 2008; Mohler and Mihalcea, 2009; Hasanah et al., 2018).…”
Section: Related Work
Confidence: 99%
“…The second approach uses a variety of unsupervised methods to determine scores based on the distance between the learner responses and the answer model (Bin et al., 2008; Mohler and Mihalcea, 2009; Hasanah et al., 2018). Sahu and Bhowmick (2020) conducted a comparative study of different features and regression models to improve ASAG. A set of text-similarity features, such as knowledge-based measures, corpus-based features, and word-embedding features, were extracted for each pair of learner response and reference answer.…”
Section: Related Work
Confidence: 99%
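The supervised, feature-based approach the quotes describe can be sketched minimally: extract similarity features for each (learner response, reference answer) pair, then fit a regressor on instructor-graded examples. This is an illustrative sketch only; the features (bag-of-words cosine, token overlap), the example data, and the one-feature least-squares fit are stand-ins, not the specific measures or models used in the cited papers.

```python
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity (a stand-in for corpus- or embedding-based features)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap of token sets (a stand-in for knowledge-based measures)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def fit_1d(xs, ys):
    """Ordinary least squares on one feature: score ~ w*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

# Hypothetical graded training data: (learner response, instructor score).
reference = "photosynthesis converts light energy into chemical energy"
graded = [
    ("photosynthesis converts light energy into chemical energy", 5.0),
    ("it converts light into chemical energy", 4.0),
    ("plants grow in soil", 1.0),
]

# Train a regressor on the cosine-similarity feature, then grade a new answer.
xs = [cosine_sim(resp, reference) for resp, _ in graded]
ys = [score for _, score in graded]
w, b = fit_1d(xs, ys)
predicted = w * cosine_sim("light energy becomes chemical energy", reference) + b
print(round(predicted, 2))
```

In practice, Sahu and Bhowmick (2020) combine many such similarity features and compare several regression models rather than fitting a single feature as done here.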