test development efforts. As part of the foundation for the development of the next generation TOEFL test, papers and research reports were commissioned from experts within the fields of measurement, language teaching, and testing through the TOEFL 2000 project. The resulting critical reviews, expert opinions, and research results have helped to inform TOEFL program development efforts with respect to test construct, test user needs, and test delivery. Opinions expressed in these papers are those of the authors and do not necessarily reflect the views or intentions of the TOEFL program.

These monographs are also of general scholarly interest, and the TOEFL program is pleased to make them available to colleagues in the fields of language teaching and testing and international student admissions in higher education.

The TOEFL 2000 project was a broad effort under which language testing at Educational Testing Service® (ETS®) would evolve into the 21st century. As a first step, the TOEFL program revised the Test of Spoken English™ (TSE®) and introduced a computer-based version of the TOEFL test. The revised TSE test, introduced in July 1995, is based on an underlying construct of communicative language ability and represents a process approach to test validation.
The computer-based TOEFL test, introduced in 1998, took advantage of new forms of assessment and improved services made possible by computer-based testing, while also moving the program toward its longer-range goals, which included:

• the development of a conceptual framework that takes into account models of communicative competence
• a research program that informs and supports this emerging framework
• a better understanding of the kinds of information test users need and want from the TOEFL test
• a better understanding of the technological capabilities for delivery of TOEFL tests into the next century

Monographs 16 through 20 were the working papers that laid out the TOEFL 2000 conceptual frameworks with their accompanying research agendas. The initial framework document, Monograph 16, described the process by which the project was to move from identifying the test domain to building an empirically based interpretation of test scores. The subsequent framework documents, Monographs 17-20, extended the conceptual frameworks to the domains of reading, writing, listening, and speaking (both as independent and interdependent domains). These conceptual frameworks guided the research and prototyping studies described in subsequent monographs that resulted in the final test model. The culmination of the TOEFL 2000 project is the next generation TOEFL test that will be released in September 2005.

As TOEFL 2000 projects are completed, monographs and research reports will continue to be released and public review of project work invited.

TOEFL Program
Educational Testing Service

Abstract

This report documents two coordinated exploratory studies into the nature of oral English-for-academic-purposes (EAP) proficiency. Study I used verbal-report methodology to examine...
Speaking tasks involving peer-to-peer candidate interaction are increasingly being incorporated into language proficiency assessments, both in large-scale international testing contexts and in smaller-scale ones, such as course-related assessments. This growth in the popularity and use of paired and group orals has stimulated research, particularly into the types of discourse produced and the possible impact of candidate background factors on performance. However, although the strongest argument for the validity of peer-to-peer assessment lies in the claim that such tasks allow for the assessment of a broader range of interactional skills than the more traditional interview-format tests do, there is surprisingly little research into the judgments that are made of such performances. Because raters and rating criteria occupy a crucial mediating position between output and outcomes, investigation is warranted into how raters construe the interaction in these tasks. Such investigations have the potential to inform the development of interaction-based rating scales and to ensure that validity claims are moved beyond the content level to the construct level. This paper reports the findings of a verbal protocol study of teacher-raters viewing the paired test discourse of 17 beginner dyads in a university-based Spanish as a foreign language course. The findings indicate that the raters identified three interaction parameters: non-verbal interpersonal communication, interactive listening, and interactional management. The findings have implications for our understanding of the construct of effective interaction in paired candidate speaking tests, and for the development of appropriate rating scales.
Whilst claims to validity for conversational oral interviews as measures of non-test conversational skills are based largely on the unpredictable or impromptu nature of the test interaction, ironically this very feature is also likely to lead to a lack of standardisation across interviews, and hence potential unfairness. This article addresses the question of variation amongst interviewers in the ways they elicit demonstrations of communicative ability, and the impact of this variation on candidate performance and, hence, on raters’ perceptions of candidate ability. Through a discourse analysis of two interviews involving the same candidate with two different interviewers, it illustrates how intimately the interviewer is implicated in the construction of candidate proficiency. The interviewers differed with respect to the ways in which they structured sequences of topical talk, their questioning techniques, and the type of feedback they provided. An analysis of verbal reports produced by some of the raters confirmed that these differences resulted in different impressions of the candidate’s ability: in one interview the candidate was considered to be more ‘effective’ and ‘willing’ as a communicator than in the other. The paper concludes with a discussion of the implications for rater training and test design.