This study explores the attitudes of raters of English speaking tests towards the global spread of English and the challenges of rating speakers of Indian English in descriptive speaking tasks. The claims put forward by language attitude studies point to a validity issue in English speaking tests: listeners tend to hold negative attitudes towards speakers of non-standard English and to judge them unfavorably. Because language assessment research lacks adequate measures of listener/rater attitude towards emerging varieties of English, a Rater Attitude Instrument (RAI) was developed as a three-phase self-report measure. It comprises 11 semantic differential scale items and 31 Likert scale items representing the three attitude dimensions of feeling, cognition, and behavior tendency posited by psychologists. Confirmatory factor analysis supported a two-factor structure with acceptable model fit indices. This measure represents a new initiative to examine raters' psychological traits as a source of validity evidence in English speaking tests, strengthening arguments about test-takers' English language proficiency in response to the changing sociolinguistic landscape. The implications for norm selection in English oral tests are also discussed.

Theoretical background

World Englishes

New lines of sociolinguistic research, such as world Englishes (WE) and English as a lingua franca (ELF), have acknowledged the pluricentricity of English. The Kachru-led line of WE research documents the function, status, linguistic maturity, and legitimacy of the emerging varieties of English.

Unlike the constructs conceptualized in the field of psychology, recent studies on rater attitude towards WE have revealed mixed and inconclusive findings. Kim (2005) examined the language backgrounds of raters, their attitudes toward WE, and how they scored the speech performance of six Korean students on the Test of Spoken English (TSE) picture description task, using holistic and analytic scales.
Although their ratings on the holistic scales were fairly similar, the raters' different attitudes towards WE significantly affected their analytic ratings of grammar, rate of speech, and task fulfillment, with those labeled "positive" giving more lenient ratings. Chalhoub-Deville and Wigglesworth (2005) investigated the rating performance of raters from inner circle countries, including Australia, Canada, the UK, and the US, and found no significant difference in their evaluations of ESL test-takers' speaking performance.
Background: A strong interest in researching World Englishes (WE) in relation to language assessment has become an emerging theme in language assessment studies over the past two decades. While research on WE has highlighted the status, function, and legitimacy of varieties of English, it remains unclear how raters respond to the results of the global spread of English. Also unclear is whether their attitudes towards these varieties constitute a biasing factor in the scores they award in English speaking tests. As such, this study investigates the relationship between rater attitudes towards Indian English, as an example of WE, measured by the Rater Attitude Instrument (RAI), and the scores that raters awarded to IELTS speech samples produced by Indian examinees.

Methods: A total of 96 teacher raters rated six IELTS speech samples and then completed the RAI online. Correlation analysis, MANOVA, and Tukey contrasts were performed to test the extent to which rater attitudes towards Indian English affect rater scoring decisions on IELTS speech samples.

Results: Moderate to strong correlations were observed between RAI scores and IELTS speech sample scores. The MANOVA results indicate significantly different ratings, with the positive attitude group consistently awarding higher scores to the IELTS speech samples than the negative attitude group on all four analytic rating criteria. Furthermore, the RAI appears to be a significant predictor of IELTS speech sample scores.

Conclusion: A link between rater attitude towards Indian English, as an example of WE, and scoring tendency for Indian examinees may exist in a language assessment context.
Thus, as raters reorient their views, broaden their grasp of WE, and as awareness of WE grows in the language testing community, the findings suggest that testing agencies must account for potential rater bias towards WE, an understanding that should be added to the relevant literature.
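The correlation step described in the Methods section can be illustrated with a minimal sketch: computing Pearson's r between raters' RAI attitude scores and the mean band each rater awarded to the speech samples. The function below is a standard textbook implementation, and all data values and variable names are hypothetical illustrations, not the study's data.

```python
# Minimal sketch of the correlation step: Pearson's r between raters'
# RAI attitude scores and the mean IELTS band each rater awarded.
# All data values below are hypothetical, for illustration only.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rai_scores = [3.1, 4.2, 2.5, 3.8, 4.5, 2.9]   # hypothetical RAI totals per rater
ielts_means = [6.0, 6.5, 5.5, 6.5, 7.0, 5.5]  # hypothetical mean bands awarded
r = pearson_r(rai_scores, ielts_means)
```

A positive r here would mirror the reported pattern: raters with more favorable attitudes towards Indian English tend to award higher scores. In practice this analysis would be run with a statistics package that also reports a significance test for r.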