Proceedings of the First ACM Conference on Learning @ Scale Conference 2014
DOI: 10.1145/2556325.2566238

Scaling short-answer grading by combining peer assessment with algorithmic scoring

Abstract: Peer assessment helps students reflect and exposes them to different ideas. It scales assessment and allows large online classes to use open-ended assignments. However, it requires students to spend significant time grading. How can we lower this grading burden while maintaining quality? This paper integrates peer and machine grading to preserve the robustness of peer assessment and lower grading burden. In the identify-verify pattern, a grading algorithm first predicts a student grade and estimates confidence…


Cited by 77 publications (22 citation statements)
References 13 publications (14 reference statements)
“…Learning in a MOOC requires that students apply self regulation. While substantial research has been done on studying MOOC discussion forums (Ramesh et al, 2013;Brinton et al, 2013;Anderson et al, 2014;Sinha, 2014b), grading strategies for assignments (Tillmann et al, 2013;Kulkarni et al, 2014) and deployment of reputation systems (Coetzee et al, 2014), inner workings of students' interaction while watching MOOC video lectures have been much less focused upon. Given that roughly 5% (Huang et al, 2014) of students actually participate in MOOC discussion forums, it would be legitimate to ask whether choosing video lectures as units of analysis would be more insightful.…”
Section: Introduction (mentioning)
confidence: 99%
“…Disaggregation can be an important tool: summing individual scores for components of good writing (e.g. grammar and argumentation) can capture the overall quality of an essay more accurately than asking for a single writing score [9,24]. Therefore, PeerStudio asks for individual judgments with yes/no or scale questions, and not aggregate scores.…”
Section: Related Work (mentioning)
confidence: 99%
“…When rubric cell descriptions are complex, novice raters can develop mental models that stray significantly from the rubric standard, even if it is shown prominently [24]. To mitigate the challenges of multi-attribute matching, PeerStudio asks instructors to list multiple distinct criteria of quality along each dimension ( Figure 4).…”
Section: Rubrics (mentioning)
confidence: 99%
“…Therefore, we introduce the identify-verify assessment pattern, where a machine-grading algorithm estimates ambiguity in student answers and determines the number of peers who rate the work [12]. Peers identify key features of the answer using a staff-provided rubric, and verify each other's assessment.…”
Section: Machine Learning Reduces Peers' Busywork (mentioning)
confidence: 99%
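The identify-verify pattern described in these excerpts ties peer workload to the machine grader's confidence: the more ambiguous a student answer, the more peers are asked to rate it. A minimal sketch of that idea follows; the function name, thresholds, and linear scaling are illustrative assumptions, not the paper's actual algorithm.

```python
def peers_needed(confidence, min_peers=1, max_peers=5, threshold=0.8):
    """Hypothetical identify-verify step: map the machine grader's
    confidence (0.0-1.0) to a number of peer verifications.

    High-confidence predictions get the minimum number of peer checks;
    low-confidence (ambiguous) answers are routed to more peers.
    """
    if confidence >= threshold:
        return min_peers
    # Scale linearly from max_peers (confidence 0) toward min_peers
    # as confidence approaches the threshold.
    return min_peers + round((max_peers - min_peers) * (1 - confidence / threshold))

# Example: a confident prediction needs one verifier, an ambiguous one needs five.
print(peers_needed(0.9))  # 1
print(peers_needed(0.0))  # 5
```

Under this sketch, grading effort concentrates on the ambiguous answers where peer judgment adds the most value, which is the burden-reduction mechanism the abstract describes.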