2005
DOI: 10.1111/j.0083-2919.2005.00419.x

Rater judgment and English language speaking proficiency

Abstract: The paper investigates whether there is a shared perception of speaking proficiency among raters from different English-speaking countries. More specifically, this study examines whether there is a significant difference among English language learning (ELL) teachers residing in Australia, Canada, the UK, and the USA when rating speech samples of international English language students. Teachers were asked to rate samples from international students who took the Test of Spoken English (TSE), the oral component…

Cited by 34 publications (18 citation statements)
References 6 publications
“…The findings of the study are in accordance with the literature which suggests that the construct-irrelevant factors can influence the assessment of the raters and the scores of the test-takers in oral interviews (e.g., Chalhoub-Deville & Wigglesworth, 2005; Myford & Wolfe, 2000; O'Loughlin, 2002; O'Sullivan, 2000; Winke & Gass, 2012; Winke et al., 2011). Several factors that affect raters' scorings in oral interviews have been studied in the literature; however, to the knowledge of the researchers, no study has been conducted to investigate the effects of the raters' prior knowledge of the students' proficiency levels on their scoring behaviors during proficiency exams oral interviews.…”
Section: Results (supporting)
confidence: 89%
“…In conclusion, the findings of the present study concur with the previous studies by confirming that raters may be affected by factors other than the actual performance of the test-takers (e.g., Chalhoub-Deville, 1995; Chalhoub-Deville & Wigglesworth, 2005; Lumley & McNamara, 1995; Myford & Wolfe, 2000; Winke & Gass, 2012). Whether random or systematic, similar to the other studies, measurement error was observed in this study, underlining the influential factors that may cause disagreement within and/or among the raters' judgments in oral performance assessments.…”
Section: Discussion (supporting)
confidence: 91%
“…First, quantitative analyses of individual rater differences have mainly been conducted by using Multifaceted Rasch Analysis (MFRA) on harshness, consistency, and perception of item difficulty (Brown, 1995; Kim, 2009; Winke, Gass, & Myford, 2013; Zhang & Elder, 2011). Then group differences have been explored by using t-tests (Brown, 1995; Fayer & Krasinski, 1987; Winke et al., 2013), chi-square statistics (Brown, 1995; Zhang & Elder, 2011), or Multivariate Analysis of Variance (MANOVA) (Chalhoub-Deville & Wigglesworth, 2005). Other techniques, such as G-theory and logistic regression analyses, were also used by previous studies (Carey et al., 2011; Xi & Mollaun, 2011).…”
Section: Introduction (mentioning)
confidence: 99%
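The statement above notes that group differences between raters have been explored with t-tests, chi-square statistics, and MANOVA. Purely as an illustration of that last technique, the following is a minimal Python sketch of a MANOVA on rater scores grouped by country, using statsmodels; the country groups, criteria, and score values are invented for the example and are not data from the cited studies.

```python
# Minimal sketch: MANOVA comparing rater-country groups on two analytic criteria.
# All data below are hypothetical; this illustrates the technique, not the cited analyses.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Each row is one rater; columns hold that rater's mean scores on two criteria.
ratings = pd.DataFrame({
    "country": ["Australia"] * 3 + ["Canada"] * 3 + ["UK"] * 3 + ["USA"] * 3,
    "fluency": [3.2, 3.5, 3.4, 3.0, 3.3, 3.1, 3.6, 3.4, 3.5, 3.3, 3.7, 3.6],
    "grammar": [3.0, 3.3, 3.2, 2.9, 3.1, 3.0, 3.4, 3.2, 3.3, 3.1, 3.5, 3.4],
})

# MANOVA tests whether the mean vector (fluency, grammar) differs across countries.
result = MANOVA.from_formula("fluency + grammar ~ country", data=ratings)
print(result.mv_test())
```

Running the sketch prints Wilks' lambda, Pillai's trace, and related multivariate test statistics for the country factor; a small p-value would indicate that the rater groups differ on the combined score profile rather than on any single criterion in isolation.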
“…Questions about bias on the part of raters towards the varieties of English in the world today are arising in the relevant discussions (Davies et al. 2003). Studies have investigated the impact of rater nationality on speaking test scores (Chalhoub-Deville and Wigglesworth 2005; Hamp-Lyons and Zhang 2001), differences in scores due to rater attitudes towards Korean English (Kim 2005), and the recent development of a "rater attitude instrument" that measures raters' attitudes towards WE (Hsu 2016). The emerging agenda on rater psychological traits and attitude-behavior relationship includes broad concerns about the impact of WE on English speaking test scores, score validity (Davies et al. 2003), fairness (Kunnan 2004), and unexpected consequences of test use (Davidson 2006).…”
Section: Introduction (mentioning)
confidence: 99%