This study addressed the need to examine and improve current assessments of listening comprehension (LC) for university EFL learners. These assessments adopted a traditional approach in which test-takers listened to an audio recording of a spoken interaction and then independently responded to a set of questions. This static approach to assessment is at odds with the way listening was taught in the classroom, where LC tasks often involved some scaffolding. To address this limitation, a dynamic assessment (DA) version of the listening test was proposed and investigated. DA involves mediation and meaning negotiation as learners respond to LC tasks and items. This paper described: (a) the local assessment context, (b) the relevance of DA in this context, and (c) the findings of an empirical study that compared the new and current LC assessments. Sixty Tunisian EFL students responded to an LC test with two parts, static and dynamic. The tests were scored by 11 raters. Both the test-takers and the raters were interviewed about their views of the two assessments. Score analyses using Many-Facet Rasch Measurement (MFRM; FACETS program, version 3.61.0) indicated that test-taker ability, rater behavior, and item difficulty estimates varied across test types. Qualitative data analysis indicated that although the new assessment provided better insights into learners' cognitive and meta-cognitive processes than the traditional assessment did, raters were doubtful about the value of and processes involved in DA, mainly because they were unfamiliar with it. The paper discussed the findings and their implications for listening assessment practices in this context and for theory and research on listening assessment.
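The Many-Facet Rasch Measurement referred to here models the log-odds of a test-taker receiving rating category k rather than k−1 on an item as an additive function of the facet measures. In one common (rating-scale) formulation, with facets for test-taker ability, item difficulty, and rater severity as in this study:

```latex
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
```

where \(B_n\) is the ability of test-taker \(n\), \(D_i\) the difficulty of item \(i\), \(C_j\) the severity of rater \(j\), and \(F_k\) the threshold for moving from category \(k-1\) to category \(k\). This is the standard model family implemented by FACETS; the exact facet structure used in the study may differ.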
The current study addressed the impact of computerized dynamic assessment (C-DA) on the TOEFL iBT listening comprehension test administered to Iranian EFL learners (n = 185) who took part in TOEFL preparation courses at several language centres in Iran. To mediate test-takers with hints as they processed the listening questions, a computer software program was developed to produce three scores: actual, mediated, and learning potential. Findings of the study indicated that actual and mediated scores differed significantly across listening ability levels in almost all question types. Overall, the results highlighted the significant positive impact of C-DA on EFL test-takers' performance on the monologue and dialogue tasks. Teachers were recommended to implement C-DA, since the information gained from this sociocultural assessment mode empowers them to provide learners with more individualized and, accordingly, more effective teaching and assessment strategies.
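The abstract does not give the scoring formulas, but in the C-DA literature the learning potential score (LPS) is commonly computed from the actual (unmediated) and mediated scores using Kozulin and Garb's (2002) formula. A minimal sketch, assuming that formula and hypothetical score values:

```python
def learning_potential_score(actual, mediated, max_score):
    """Learning Potential Score (Kozulin & Garb, 2002):
    LPS = (2 * mediated - actual) / max_score.
    Scores near or above 1.0 are usually read as strong
    responsiveness to mediation."""
    return (2 * mediated - actual) / max_score

# Hypothetical example: a test-taker scores 12 unaided and 18 with
# hints on a 20-point listening section.
print(learning_potential_score(12, 18, 20))  # → 1.2
```

The gap between the actual and mediated scores captures what the learner can do with help; normalizing by the maximum score makes LPS comparable across sections of different lengths.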
The study investigated the process of aligning the four levels of the International English Language Competency Assessment (IELCA) suite examinations, B1, B2, C1, and C2, onto the Common European Framework of Reference (CEFR) by explaining and discussing the five linking stages (Council of Europe [CoE], 2009). Unlike previous studies, this study used all five linking stages together to make fair judgements and informed decisions about the practical consequences and validity arguments of this mapping task. Findings indicated that the useful, in-depth discussions of the relevant CEFR descriptors resulted in a deeper awareness, prompting re-familiarisation with and re-definition of the salient features of the different skills and items, thus making them more specific to the CEFR descriptors. The ample alignment activities provided fertile ground for dependable results. For instance, teacher estimates confirmed the cut scores with high agreement percentages, ranging from 74.4% to 99.34%. Also, the FACETS analyses showed a good global model fit with high reliability of the judgement process, though only after the judges underwent rater training sessions. Specifically, the majority of item difficulty estimates fell within the typical range, indicating that the IELCA examinations were measuring the underlying construct traits; however, the empirical validation called for additional data and further implementation practice regarding other judgements on the level boundaries of the IELCA examinations. Further mapping challenges, implications, and future research were also discussed.
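The agreement percentages reported above compare teachers' level estimates for candidates against the levels implied by the panel's cut scores. A minimal sketch of such a check, using hypothetical judgement data (the function name and example levels are illustrative, not from the study):

```python
def percent_agreement(teacher_estimates, panel_levels):
    """Percentage of teacher CEFR-level estimates that match the
    level assigned to the same candidate by the panel cut scores."""
    matches = sum(t == p for t, p in zip(teacher_estimates, panel_levels))
    return 100 * matches / len(panel_levels)

# Hypothetical data: eight candidates, one disagreement.
teachers = ["B1", "B2", "B2", "C1", "B2", "C1", "C2", "B1"]
panel    = ["B1", "B2", "B2", "C1", "B1", "C1", "C2", "B1"]
print(percent_agreement(teachers, panel))  # → 87.5
```

Exact percentage agreement is the simplest such index; standard-setting studies often supplement it with chance-corrected measures such as Cohen's kappa.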