Proceedings of the 27th ACM International Conference on Information and Knowledge Management 2018
DOI: 10.1145/3269206.3271755
Creating Scoring Rubric from Representative Student Answers for Improved Short Answer Grading

Cited by 23 publications (7 citation statements) · References 22 publications
“…Grading based on the similarity between student answers and one or more example answers has also been investigated by Marvaniya et al. (2018) and Mohler et al. (2011).…”
Section: Automated Grading
confidence: 99%
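The approach quoted above can be illustrated with a minimal sketch: score a student answer by its best similarity to one or more example answers. TF-IDF cosine similarity stands in here for the richer similarity features used in the cited works, and the grading scale is an assumed parameter.

```python
# A minimal sketch of similarity-based grading: the student answer is scored
# by its maximum cosine similarity to one or more example (reference) answers.
# TF-IDF is used purely for illustration; the cited works use richer
# text-similarity features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def grade_by_similarity(student_answer: str, example_answers: list[str],
                        max_score: float = 5.0) -> float:
    """Scale the best similarity to any example answer into a grade."""
    vectorizer = TfidfVectorizer()
    # Fit on the example answers and the student answer together so that
    # they share a single vocabulary.
    vectors = vectorizer.fit_transform(example_answers + [student_answer])
    example_vecs, student_vec = vectors[:-1], vectors[-1]
    best_similarity = cosine_similarity(student_vec, example_vecs).max()
    return float(best_similarity) * max_score

print(grade_by_similarity(
    "A stack is a last-in first-out data structure.",
    ["A stack is a LIFO (last-in, first-out) collection."]))
```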
“…In the early stages, deep learning was usually involved in ASAG tasks only as a complement to feature-based methods, supplying some, but not all, of the features. For example, Marvaniya et al. [5] and Saha et al. [2] used a pre-trained Bi-LSTM network, InferSent [19], to obtain sentence embedding vectors of answer texts, which compensates for the inability of token-overlap methods to represent context. Tan et al. [20] proposed a scoring method combining Graph Convolutional Networks (GCNs) with several sparse features.…”
Section: Related Studies
confidence: 99%
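As a rough illustration of the sentence-embedding step described in the excerpt above, the following sketch builds an InferSent-style encoder: a bidirectional LSTM over token embeddings followed by max pooling over time. The vocabulary size, dimensions, and randomly initialized embeddings are illustrative assumptions; the actual InferSent model is pre-trained on natural language inference data.

```python
# A sketch of InferSent-style sentence embedding: a bidirectional LSTM over
# token embeddings, max-pooled over the time dimension. All dimensions and
# the random embeddings are assumed values for illustration only.
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)

    def forward(self, token_ids):                # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))  # (batch, seq_len, 2*hidden)
        return h.max(dim=1).values               # max-pool over time

encoder = BiLSTMEncoder()
answer_ids = torch.randint(0, 10000, (2, 12))  # two toy tokenized answers
embeddings = encoder(answer_ids)               # (2, 1024) sentence vectors
```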
“…$S_c = \mathrm{ReLU}(\mathrm{CNN}(S)) \in \mathbb{R}^{d_c \times n}$ (5), where $\mathrm{CNN}(\cdot)$ denotes the Siamese CNN with zero padding and shared parameters, $d_c$ is the number of convolution kernels, and $\mathrm{ReLU}(\cdot)$ denotes the Rectified Linear Unit activation function.…”
Section: Convolutional Layer
confidence: 99%
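A minimal sketch of equation (5), assuming PyTorch and illustrative dimensions: a 1-D convolution with zero ("same") padding and ReLU, applied with shared weights to both sides of the Siamese pair so that each $(d, n)$ embedding matrix maps to a $(d_c, n)$ feature map.

```python
# Sketch of S_c = ReLU(CNN(S)) with zero padding and shared parameters.
# d (input embedding size), d_c (number of kernels), n (sequence length),
# and the kernel width are assumed values for illustration.
import torch
import torch.nn as nn

d, d_c, n = 300, 128, 20
conv = nn.Conv1d(d, d_c, kernel_size=3, padding=1)  # zero padding keeps length n

def encode(S: torch.Tensor) -> torch.Tensor:
    """S_c = ReLU(CNN(S)), mapping (d, n) -> (d_c, n)."""
    return torch.relu(conv(S.unsqueeze(0))).squeeze(0)

student_answer = torch.randn(d, n)    # token embedding matrix S
reference_answer = torch.randn(d, n)
# Shared parameters: the same `conv` module encodes both sides of the pair.
S_c_student, S_c_reference = encode(student_answer), encode(reference_answer)
print(S_c_student.shape)              # torch.Size([128, 20])
```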
“…The challenge of automatically grading short answers was first posed a few decades ago. Earlier ASAG approaches consisted of clustering similar answers (Basu et al., 2013; Zehner et al., 2016), utilizing hand-crafted rules, schemes, and ideal answer models (Leacock and Chodorow, 2003; Willis, 2015), or combining manually engineered features with various machine learning models (Marvaniya et al., 2018; Mohler et al., 2011; Saha et al., 2018; Sahu and Bhowmick, 2020; Sultan et al., 2016). Please refer to one of the comprehensive surveys of the field for a more in-depth treatment of these approaches (Burrows et al., 2015; Galhardi and Brancher, 2018; Roy et al., 2015).…”
Section: Automatic Short Answer Grading
confidence: 99%