Raters are central to writing performance assessment, and rater development (training, experience, and expertise) involves a temporal dimension. However, few studies have examined new and experienced raters' rating performance longitudinally over multiple time points. This study uses operational data from the writing section of the MELAB (n = 20,662 ratings), an international examination of English proficiency, to investigate the rating quality of new and experienced raters over three time periods spanning 12 to 21 months. Rating quality was operationalized in terms of rater severity and consistency, with estimates modeled using multi-facet Rasch methodology. Results indicate that, within one particular rating context, (1) novice raters, while initially differing in performance, learn to rate appropriately relatively quickly, (2) raters are able to maintain rating quality over time, and (3) rating volume and rating quality may be related. Implications for rater preparation, rater certification, and the notion of the expert rater are discussed.
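For reference, the multi-facet Rasch model underlying severity and consistency estimates of this kind is commonly written, following Linacre's many-facet formulation (the facet labels below are illustrative, not taken from the study itself), as

\[
\ln\!\left(\frac{P_{njk}}{P_{nj(k-1)}}\right) = B_n - C_j - F_k,
\]

where \(P_{njk}\) is the probability of examinee \(n\) receiving category \(k\) from rater \(j\), \(B_n\) is the examinee's ability, \(C_j\) is the rater's severity, and \(F_k\) is the difficulty of the step from category \(k-1\) to \(k\). Rater consistency is then typically evaluated through fit statistics comparing observed ratings with the model's expectations.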
Language performance assessments typically require human raters, introducing possible error. In international examinations of English proficiency, rater language background is an especially salient factor that needs to be considered. This study examines whether rater language background is associated with bias in writing performance assessment. Data for this study are ratings assigned by Michigan English Language Assessment Battery (MELAB) raters to compositions written by examinees of various language backgrounds. While most of the raters are native speakers of English, four have first languages other than English: two Spanish, one Korean, and one bilingual speaker of Filipino and Chinese (Amoy). Examinees were divided into 21 language groups. The IRT application FACETS was used to estimate and control for rater severity when calculating the amount of bias reflected in each rater's ratings for each examinee language group. Results show that the magnitude of bias terms for all raters across all language groups was minimal, and thus had little effect on examinee scores, and that there is no pattern of language-related bias in the ratings.
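In a FACETS bias/interaction analysis of this kind, the rater-by-group bias term can be expressed (again in illustrative notation, not that of the study) by extending the basic many-facet model with an interaction parameter:

\[
\ln\!\left(\frac{P_{njgk}}{P_{njg(k-1)}}\right) = B_n - C_j - F_k - \varphi_{jg},
\]

where \(\varphi_{jg}\) captures how much more severely (or leniently) rater \(j\) scores examinees in language group \(g\) than the rater's overall severity \(C_j\) would predict. Bias terms near zero, as reported here, indicate negligible group-specific bias.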