This study investigates the validity of assessing L2 pragmatics in interaction using mixed methods, focusing on the evaluation inference. Open role-plays that are meaningful and relevant to the stakeholders in an English for Academic Purposes context were developed for classroom assessment. To support meaningful score interpretations and accurate evaluations of interaction-involved pragmatic performances, interaction-sensitive, data-driven rating criteria were developed based on qualitative analyses of examinees' role-play performances. Conversation analysis of the data revealed various pragmatic and interactional features indicative of differing levels of pragmatic competence in interaction. A many-facet Rasch measurement (FACETS) analysis indicated that the role-plays reliably differentiated among the 102 examinees' varying levels of pragmatic ability. The raters showed internal consistency despite differing degrees of severity. Stable fit statistics and distinct difficulties were reported for each of the interaction-sensitive rating criteria. The findings served as backing for the evaluation inference in the validity argument. Finally, implications of the findings for operationalizing interaction-involved language performances and developing rating criteria are discussed.
This study explicates the cognitive validity of task-based L2 pragmatic speaking assessment by examining the reported strategy use of test takers at varying performance levels across different task types. Thirty university-level ESL learners completed four pragmatic speaking tasks that differed in the formality of the pragmatic actions involved. Two trained raters scored the task-based pragmatic performances using analytic rating criteria and displayed a satisfactory level of consistency and accuracy in scoring. The test takers' retrospective reports were transcribed and analyzed to develop a valid coding scheme consisting of cognitive, metacognitive, and pragmatic strategies. The association between the test takers' rater-scored pragmatic performances and their reported strategy use was then examined. Compared to the lower-ability test takers, the higher-ability test takers utilized diverse strategies more frequently, ranging from varied pragmatic strategies to strategies specifically related to managing task demands. Further, the test takers utilized distinct types of strategies suited to handling the unique pragmatic situations and complexities involved in each assessment task. These findings explain how the test takers cognitively interacted with the assessment tasks and which strategies potentially led to successful pragmatic performances. The implications of examining pragmatic strategy use are discussed in terms of advancing practices of teaching and assessing L2 pragmatics.
This qualitative study reports an investigation of the nature of interactional competence at various levels of achievement in the context of role-play speaking assessment. The focus is on how examinees jointly accomplish the interactional work involved in proposal sequences in role-play interaction. Based on a conversation analysis of a corpus of role-play interaction, I argue that the distinct sequential organizations and interactional features found across examinees' levels serve as critical validity evidence for assessing interactional competence. Various shift markers and stepwise transitions were present in higher-level examinees' talk when they initiated and shifted actions in role-play interaction. By contrast, lower-level examinees' opening turns were typically forwarded without establishing a shared understanding relevant to an upcoming action. When the examinees responded to various proposal sequences, coherent and sufficient topic organizations were recurrent in higher-level performances. The examinees, regardless of level, managed to close the role-play interaction successfully. I discuss the implications of the demonstrated link between these recurrent interactional features and examinees' interactional competence for future research on speaking assessment and teaching.