2014
DOI: 10.1007/978-3-319-13296-9_10

A Survey of Methods for Improving Review Quality

Cited by 15 publications (5 citation statements)
References 10 publications
“…Future research could also consider whether an intervention can enhance communication quality in GVTs. In this regard, two initiatives that seem promising for increasing communication quality, particularly that which occurs within the peer feedback context, might be calibration training (see Gehringer, 2014) and double-loop mutual assessments (see Babik, Iyer, & Ford, 2012).…”
Section: Strengths, Limitations, and Additional Directions for Future R...
confidence: 99%
“…Peer-grading as a specific aspect of peer-assessment is especially challenging when it comes to teaching at scale. The situation was ameliorated with the introduction of tools like CrowdGrader (Dasgupta & Ghosh, 2013; De Alfaro & Shavlovsky, 2014) and various massive open online course (MOOC) platforms (Gehringer, 2014). De Alfaro and Shavlovsky (2014) found that the grades computed by CrowdGrader were precise and suitable for student homework evaluation.…”
Section: Peer Assessment as Grading Methods
confidence: 99%
“…We explored existing literature to conduct a systematic analysis of the multitude, variety, and complexity of such implementations, functionalities, and design choices in OPRA. Several attempts have been made to survey computerized peer-assessment practices (Bouzidi & Jaillet, 2009; Chang et al., 2021; Davies, 2000; Doiron, 2003; Gikandi et al., 2011; Luxton-Reilly, 2009; Tenório et al., 2016; Topping, 2005), or some specific aspects of peer assessment, such as approaches to reliability and validity of peer evaluations (Gehringer, 2014; Misiejuk & Wasson, 2021; Patchan et al., 2017). However, meta-analysis of OPRA systems is complicated because their design space has high dimensionality; OPRA practices and designs vary across many disciplines in many different ways (Søndergaard & Mulder, 2012).…”
Section: Literature Review
confidence: 99%