2009
DOI: 10.1080/02602930802079463
An empirical test of the validity of student evaluations of teaching made on RateMyProfessors.com

Cited by 37 publications (25 citation statements)
References 6 publications
“…Despite these criticisms, studies that show RMP's rankings have a high correlation with markers that are more widely accepted as measures of faculty performance are starting to appear, all based on university-wide statistics. For example, Sonntag, Bassett, and Snyder (2008) examined the rankings of 126 professors at Lander University and compared the two systems. The major findings were that student rankings on the ease of courses were consistent in both systems and correlated with grades, and that professors' rankings for "clarity" and "helpfulness" on RateMyProfessors.com correlated with overall rankings for course excellence from the official evaluations.…”
Section: About RateMyProfessors.com (mentioning)
confidence: 99%
“…While restricting our data set to a single university limits the wider applicability of our results, all currently existing studies in this area limit their data set in some way. The seminal work of Hamermesh and Parker (2005) on the impact of attractiveness on teacher evaluations uses data from the University of Texas at Austin, while multiple studies discussing RMP data follow the faculty at the authors' own institutions (Lawson and Stephenson, 2005; Langbein, 2008; Sonntag et al., 2009). Other studies limit their analysis to faculty in certain academic fields (e.g.…”
Section: Empirical Comparison of Approaches (mentioning)
confidence: 99%
“…Studying teaching review sites: participatory or commodified agency
While most studies of popular teaching evaluation sites focus on their validity, reliability and bias (Kindred and Mohammed 2006; Coladarci and Kornfield 2007; Helterbran 2008; Davison and Price 2009; Sonntag, Bassett, and Snyder 2009; Reid 2010; Legg and Wilson 2012), few recent studies have explored, even partially, the sociocultural meanings of RMP as a popular phenomenon (e.g. Reagan 2009; Ritter 2008; Chaney 2011; Gregory 2012).…”
Section: Introduction (mentioning)
confidence: 98%
“…In comparison to the quantitative and instrumental nature of the existing literature on RMP (e.g. Felton, Mitchell, and Stinson 2004; Kindred and Mohammed 2006; Sonntag, Bassett, and Snyder 2009; Lewandowski, Higgins, and Nardone 2012), the present study is qualitative in nature, as it aims to explore the cultural implications of RMP's reviewing and rating practices. The study analyses RMP as a symptomatic example of the emerging rating subjectivity in the digital reputation society.…”
Section: Introduction (mentioning)
confidence: 99%