This study identifies a unique context for exploring lay understandings of language testing, and by extension for characterizing the nature of language assessment literacy (LAL) among non-practitioners. The context stems from an inquiry into the registration processes and support for overseas trained doctors conducted by the Australian House of Representatives Standing Committee on Health and Ageing; the data come from Hansard transcripts of the inquiry's public hearings.

Taylor (2009) has pointed out that LAL must extend beyond the teaching professions, as the stakeholders in language testing are diverse and include many groups not traditionally associated directly with a need for knowledge of assessment, including "personnel in both existing and newly established and emerging national examination boards, academics and students engaged in language testing research, language teachers or instructors, advisors and decision makers in language planning and education policy, parents, politicians and the general public" (p. 25). One problem, though, in attempting to increase the assessment literacy of such diverse groups is knowing what sort of information would meet the needs of different stakeholders and allow good decisions to be made about tests and test scores. It is to be expected, for example, that the assessment literacy needs of practitioners (that is, teachers, academics and students engaged in language testing research, test designers, school principals, etc.) might be quite different from those of test takers themselves, policy makers, and the greater public. Different levels of expertise or specialization will require different levels of literacy, and different needs will dictate the type of knowledge most useful for stakeholders (see also Brindley, 2001, pp. 128-129; Taylor, 2009, p. 27).

Regarding what might be expected of non-practitioner stakeholders, Bracey's (2000) booklet, Thinking about tests and testing: A short primer in 'assessment literacy', provides an interesting example. Written for a general audience, this short publication covers a range of testing terminology, including essential statistical terms, and some important issues in testing (such as 'who develops tests' and 'what agencies oversee the proper use of tests'). Others, such as Newton (2005), focus on the need for policy makers, in particular, to understand specific psychometric phenomena, such as measurement error, in order to understand better how test scores can be used and misused. In contrast to the literature on assessment literacy for educators, however, there is a distinct paucity of information concerning precisely what level of assessment literacy 'non-practitioners' should be expected to attain.
While scholars have proposed different models of language assessment literacy (LAL), these models have mostly comprised prescribed sets of components based on principles of good practice. As such, these models remain theoretical in nature, and represent the perspectives of language assessment researchers rather than stakeholders themselves. The project from which the current study is drawn was designed to address this issue through an empirical investigation of the LAL needs of different stakeholder groups. Central to this aim was the development of a rigorous and comprehensive survey which would illuminate the dimensionality of LAL and generate profiles of needs across these dimensions. This paper reports on the development of an instrument designed for this purpose: the Language Assessment Literacy Survey. We first describe the expert review and pretesting stages of survey development. Then we report on the results of an exploratory factor analysis based on data from a large-scale administration (N = 1086), where respondents from a range of stakeholder groups across the world judged the LAL needs of their peers. Finally, selected results from the large-scale administration are presented to illustrate the survey's utility, specifically comparing the responses of language teachers, language testing/assessment developers and language testing/assessment researchers.
This paper reports on an investigation of the potential for a shared-L1 advantage on an academic English listening test featuring speakers with L2 accents. Two hundred and twelve second-language listeners (including 70 Mandarin Chinese L1 listeners and 60 Japanese L1 listeners) completed three versions of the University Test of English as a Second Language (UTESL) listening subtest which featured an Australian English-accented speaker, a Japanese-accented speaker and a Mandarin Chinese-accented speaker. Differential item functioning (DIF) analyses were conducted on data from the tests which featured L2-accented speakers using two methods of DIF detection (the standardization procedure and the Mantel-Haenszel procedure), with candidates matched for ability on the test featuring the Australian English-accented speaker. Findings showed that Japanese L1 listeners were advantaged on a small number of items on the test featuring the Japanese-accented speaker, but these were balanced by items which favoured non-Japanese L1 listeners. By contrast, Mandarin Chinese L1 listeners were clearly advantaged across several items on the test featuring a Mandarin Chinese L1 speaker. The implications of these findings for claims of bias are discussed with reference to the role of speaker accent in the listening construct.
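The Mantel-Haenszel procedure named in this abstract can be sketched in a few lines: candidates are stratified by matched ability, a 2×2 table (group × item score) is formed per stratum, and a common odds ratio is pooled across strata. The counts, function names, and flagging threshold below are illustrative assumptions, not data or code from the study.

```python
# Minimal sketch of Mantel-Haenszel DIF detection for a single item.
# All counts are invented for illustration; they do not come from the UTESL data.
import math

def mantel_haenszel_or(strata):
    """Common odds ratio across ability strata.

    Each stratum is a tuple (A, B, C, D):
      A = reference group correct,  B = reference group incorrect,
      C = focal group correct,      D = focal group incorrect.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def ets_delta(alpha):
    """Rescale the odds ratio to the ETS delta metric; values far from 0
    (commonly |delta| >= 1.5, with significance) are flagged for review."""
    return -2.35 * math.log(alpha)

# Two ability strata for one hypothetical item:
strata = [(20, 10, 10, 20), (30, 10, 20, 20)]
alpha = mantel_haenszel_or(strata)
print(round(alpha, 2))            # -> 3.4  (item favours the reference group)
print(round(ets_delta(alpha), 2))  # -> -2.88
```

An odds ratio above 1 (equivalently, a large negative delta) indicates the item favours the reference group at matched ability levels, which is the kind of pattern the study reports for shared-L1 listeners.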
Alderson, Brunfaut and Harding (2014) recently investigated how diagnosis is practised across a range of professions in order to develop a tentative framework for a theory of diagnosis in second or foreign language (SFL) assessment. In articulating this framework, a set of five broad principles was proposed, encompassing the entire enterprise of diagnostic assessment. However, there remain questions about how best to implement these principles in practice, particularly in identifying learners' strengths and weaknesses in the less well-documented areas of SFL reading and listening. In this paper, we elaborate on the set of principles by first outlining the stages of a diagnostic process built on these principles, and then discussing the implications of this process for the diagnostic assessment of reading and listening. In doing so, we will not only elaborate on the theory of diagnosis with respect to its application in the assessment of these skills, but also discuss the ways in which each construct might be defined and operationalized for diagnostic purposes.
Diagnostic language assessment has received increased research interest in recent years, with particular attention on methods through which diagnostic information can be gleaned from standardized proficiency tests. However, diagnostic procedures in the broader sense have been inadequately theorized to date, with the result that there is still little agreement on precisely what diagnosis in second and foreign language learning actually entails. In order to address this problem, this article investigated how diagnosis is theorized and carried out in a diverse range of professions with a view to finding commonalities that can be applied to the context of language assessment. Ten semi-structured interviews were conducted with professionals from the fields of car mechanics, IT systems support, medicine, psychology and education. Data were then coded, yielding five macro-categories that fit the entire data set: (i) definitions of diagnosis, (ii) means of diagnosis, (iii) key players, (iv) diagnostic procedures, (v) treatment/follow-up. Based on findings within these categories, a set of five tentative principles of diagnostic language assessment is drawn up, as well as a list of implications for future research.