2008
DOI: 10.1080/02602930701293181
Assessing the reliability of self‐ and peer rating in student group work

Cited by 45 publications (38 citation statements)
References 23 publications
“…More recently, Cho, Schunn, and Wilson (2006) found that the aggregate of at least four peer assessments was as reliable and valid as teacher assessments, whereas the reliability and validity of single peer assessments were much lower - presumably because students evaluate a subset of all teacher assessments and as a consequence develop different evaluative perspectives that are reflected in rating variability. Irrespective of the findings on the reliability of peer ratings versus teacher ratings (Cho et al., 2006; Falchikov & Goldfinch, 2000; Magin, 2001; Stefani, 1994; Topping, 2003; Zhang, Johnston, & Kilic, 2008), similarity in peer and teacher ratings provides no information as to whether the ratings affect students' subsequent performance. It is implicitly assumed that the high degree of similarity between peer and teacher ratings reflects rating fairness and that student responses to peer ratings will be similar to their responses to teacher ratings.…”
Section: Need for Functional Development
confidence: 99%
“…The reliability and validity of peer grading have been researched primarily in the context of face-to-face higher education (Cheng & Warren, 1999; Cho et al., 2006; Falchikov & Goldfinch, 2000; Stefani, 1994; Zhang, Johnston, & Kilic, 2008). Reliability is usually measured by the consistency of scores given by multiple student graders, and validity is commonly calculated as the correlation coefficient between student-assigned scores and instructor-assigned scores, assuming that instructors can provide fair and accurate grading results.…”
Section: Reliability and Validity
confidence: 99%
“…Many stress the importance of creating an open dialogue with the students when explaining the scheme. Zhang and Johnston (2008) argue that student/teacher congruence is not necessary, since the reliability of student ratings can be construed as their consistency with one another. Concern has also been raised that the confidence of 'external' communities in university assessment practices, particularly those associated with the employment of graduates for professional careers, may be negatively influenced, but Langhan and Wheater (2003) dismiss this as of minor relevance if peer assessment is integrated into a diverse portfolio of assessment, on the understanding that learning outcomes are being achieved.…”
Section: Potential Pitfalls
confidence: 99%