2012
DOI: 10.1080/02602938.2010.507299
RateMyProfessors.com offers biased evaluations

Abstract: RateMyProfessors.com (RMP) is becoming an increasingly popular tool among students, faculty and school administrators. The validity of RMP is a point of debate; many would argue that self-selection bias obscures the usefulness of RMP evaluations. In order to test this possibility, we collected three types of evaluations: RMP evaluations that existed at the beginning of our study, traditional in-class evaluations and RMP evaluations that were prompted after we collected in-class evaluations. We found difference…

Cited by 37 publications (23 citation statements). References 18 publications.
“…However, research on this subject shows that this is not the case, and that these ideas represent misconceptions about the online evaluation of courses (University of Saskatchewan, n.d.). In fact, according to several studies, the judgements made by students about courses through online evaluations appear to be similar to those made using paper-based evaluations, despite a lower participation rate (Gamliel & Davidovitz, 2005; Legg & Wilson, 2012; Nowell et al., 2010; Stowell et al., 2012; University of Saskatchewan, n.d.; Venette, Sellnow, & McIntyre, 2010).…”

Section: Course Evaluation Scores (mentioning)

Confidence: 87%
“…Some have found RMP ratings to be biased (Clayson, 2014; Legg and Wilson, 2012), whereas others have found them to be valid measures of student learning (Otto et al., 2008). A common concern about reliability and validity is that the characteristics of the self-selected individuals who provide RMP evaluations could differ from those of students as a whole (Legg and Wilson, 2012), and that evaluations may be influenced by factors such as the race, gender, attractiveness and easiness of the instructor rather than the quality of teaching (Felton et al., 2004, 2008; Leung et al., 2013; Reid, 2010; Stuber et al., 2009). Students motivated to post their evaluations may hold extreme viewpoints about the professor (Peterson et al., 2011).…”

Section: Literature Review (mentioning)

Confidence: 99%
“…Studying teaching review sites: participatory or commodified agency. While most studies of popular teaching evaluation sites focus on their validity, reliability and bias (Kindred and Mohammed, 2006; Coladarci and Kornfield, 2007; Helterbran, 2008; Davison and Price, 2009; Sonntag, Bassett, and Snyder, 2009; Reid, 2010; Legg and Wilson, 2012), few recent studies have explored, even partially, the sociocultural meanings of RMP as a popular phenomenon (e.g. Reagan, 2009; Ritter, 2008; Chaney, 2011; Gregory, 2012).…”

Section: Introduction (mentioning)

Confidence: 99%