Recent years have seen growing use of language tests as measures of accountability within the education sector. In many countries, governmental institutions have promoted the involvement of teachers in language testing, providing training to boost teachers’ language assessment literacy (LAL). This study analyses the results of a large-scale effort to increase teachers’ LAL within the context of public language education for adults in Spain, shedding light on the scale and nature of teacher LAL, the impact of training as perceived by teachers, and their self-perceived further needs. Results show that, as in other countries in which teacher LAL has been studied, training in assessment is strongly influenced by contextual factors. Moreover, teachers perceive that this training has an impact not only on assessment-related tasks but also on their general teaching practice. Lastly, the findings reveal a significant correlation between the contents of assessment training courses and teachers’ perception of further training needs. This could indicate that the more teachers learn about specific areas of language assessment, the more training in assessment they feel is needed, suggesting a gap in teachers’ awareness of their own LAL that materialises once training is provided.
Verbal interaction has been the subject of growing interest among language professionals in Europe since the CEFR was published in 2001; in linguistics, verbal interaction has long been studied. In the Bakhtinian approach, it is even considered “the fundamental reality of language”. All types of interaction share the fact that they are dynamically co-constructed by participants. How, then, can we assess or certify interactional competence on an individual basis when dynamic instability prevails? What criteria can be defined to deconstruct interactional competence into specific operational criteria, if interaction is intrinsically multidimensional? These are the questions we address in this paper. To do so, the paper presents the insights gained from the co-operation between two certification systems: CertAcles (Spain) and CLES (France), both belonging to NULTE (Network of University Language Testers in Europe). These certification systems have agreed to collaborate extensively, sharing their constructs and assessment routines. As a result, CertAcles is shifting towards more contextualized tasks, and CLES is considering adopting descriptive assessment scales for interaction (C1 level). We hope to demonstrate that scientific collaboration of this kind can help improve individual systems.