Background: A critical issue in the International English Language Testing System (IELTS) is the validity of the IELTS listening comprehension test (hereafter IELTS LCT). Although the validity of the IELTS listening test has been investigated, it has not been examined with reference to multiple sources of evidence regarding item-internal factors. To bridge this gap, we investigated its construct validity using structural equation modelling (SEM) and assessed differential item functioning (DIF) through cognitive diagnostic modelling (CDM) and the Mantel-Haenszel (MH) procedure. Methods: First, the participants signed a consent form for participation in the study; then, 480 participants were administered a proficiency test designed by the University of Cambridge; next, 463 of the 480 participants were administered a 40-item IELTS LCT developed by the University of Cambridge. Finally, the data were analyzed with LISREL to probe the construct validity of the test; in addition, to make the DIF-related findings more reliable, both MH and CDM were used to detect potential DIF items. Results: The results of the first study confirmed an appropriate model fit: all four constructs on the IELTS LCT, i.e., gap filling, diagram labelling, multiple choice, and short answer, made a statistically significant contribution to the test. However, construct-related evidence alone may not establish validity as a whole. Given this, the second study examined DIF items to further interrogate the validity of the IELTS LCT: MH detected 15 DIF items, and CDM detected at least 6 and at most 12 DIF items. Conclusions: Due to its international nature and worldwide evaluative role, IELTS needs an approximately (though not absolutely) stable factor structure, such that it remains invariant across populations and cultures.
Naturally, a test that is highly valid in one context may suffer from some degree of invalidity with respect to related constructs in another context. With this in mind, the perspective taken in this research should not be adopted as a one-size-fits-all model: no generalization or claim is made beyond the present study.
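The Mantel-Haenszel DIF procedure referred to above can be illustrated in outline. The sketch below is a minimal, self-contained example on synthetic data (the function name, the simulated 1-logit DIF size, and the data are illustrative assumptions, not the study's instruments or results): examinees are stratified by total score, a 2x2 group-by-correctness table is formed in each stratum, and the common odds ratio is converted to the ETS delta metric, where |Δ_MH| > 1.5 is a conventional flag for non-negligible DIF.

```python
import numpy as np

def mantel_haenszel_dif(responses, group, item):
    """Estimate DIF for one item via the Mantel-Haenszel common odds ratio.

    responses : (n_examinees, n_items) matrix of 0/1 scores
    group     : length-n array, 0 = reference group, 1 = focal group
    item      : column index of the studied item
    Returns (alpha_mh, delta_mh); |delta_mh| > 1.5 is a common DIF flag.
    """
    total = responses.sum(axis=1)              # matching variable: total score
    num = den = 0.0
    for score in np.unique(total):             # one 2x2 table per score stratum
        stratum = total == score
        ref = stratum & (group == 0)
        foc = stratum & (group == 1)
        if ref.sum() == 0 or foc.sum() == 0:
            continue                           # stratum carries no information
        a = responses[ref, item].sum()         # reference correct
        b = ref.sum() - a                      # reference incorrect
        c = responses[foc, item].sum()         # focal correct
        d = foc.sum() - c                      # focal incorrect
        n = ref.sum() + foc.sum()
        num += a * d / n
        den += b * c / n
    alpha = num / den                          # MH common odds ratio
    delta = -2.35 * np.log(alpha)              # ETS delta metric
    return alpha, delta

# Synthetic demonstration: item 0 is one logit harder for the focal group.
rng = np.random.default_rng(0)
n = 400
group = np.repeat([0, 1], n // 2)
ability = rng.normal(size=n)
difficulty = np.zeros((n, 5))
difficulty[group == 1, 0] += 1.0               # injected DIF on item 0
p_correct = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty)))
responses = (rng.random((n, 5)) < p_correct).astype(int)

alpha, delta = mantel_haenszel_dif(responses, group, 0)
```

Because item 0 disadvantages the focal group, the estimated odds ratio exceeds 1 and the delta value is negative (the ETS sign convention for items favoring the reference group).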
Background: Teachers' writing proficiency, their writing assessment ability, and the role of both in improving writing instruction in second language classrooms have not been investigated empirically and rigorously. To bridge this gap, we investigated the writing proficiency, writing assessment ability, and written corrective feedback beliefs and practices of Iranian English teachers who gave feedback on learners' writing.
This study assessed how learners of English as a foreign language (EFL) improved their speaking fluency under a task-based language teaching (TBLT) approach used with ninth-grade learners at PUNIV-Cazenga, a high school in Luanda. In a case study design using picture-description tasks, learners' speech was audio-recorded before and after the teaching, during which recasts and prompts were used as feedback tools for 8 weeks. The findings indicated that learners improved their speaking fluency by increasing their speed of speech production and grammatical accuracy, elaborating on their utterances, and developing interactional language. Furthermore, learners' opinions on being taught with the TBLT approach were sought; the findings indicated that the learners felt encouraged to speak, believed in their potential to use the target language, expanded their vocabulary, and recognized the relevance of the TBLT approach. Implications of the findings for teaching practice and future research are discussed.
Abstract: The study reports on the validity of IELTS Academic Writing Task One (IAWTO) and compares and assesses the performance descriptors, i.e., coherence and cohesion, lexical resource, and grammatical range, employed on IAWTO and IELTS Academic Writing Task Two (IAWTT). To these ends, the data comprised 53 participants' responses to graphic prompts driven by IELTS scoring rubrics, a descriptive prompt, and retrospective, rather than concurrent, think-aloud protocols for detecting the cognitive validity of responses. The results showed that the IAWTO input was degenerate and insufficient, rendering the construct underrepresented, i.e., narrowing the construct. It was also found that IAWTO reflected the cognitive difficulty of diagram analysis and the intelligence-based design of the process chart rather than the bar chart, and was thus associated with construct-irrelevant variance; this is argued to bias the task toward one group, leading to the under-performance of one group in marked contrast to the over-performance of another. In addition, qualitative results based on instructors' protocols suggested that the performance descriptors were more dominant on IAWTT than on IAWTO. The pedagogical implications of the study are discussed.
Language Learning Environments: Spatial Perspectives on SLA by Phil Benson, Multilingual Matters, 2021, 168pp., £24.95 (Paperback). ISBN 9781788924894.