Background: This study aims to develop a test for assessing pragmatic comprehension ability in Chinese as a second language (L2). Following the framework of an argument-based approach to test validation, this study attempts to obtain backing for the Evaluation and Explanation inferences. Methods: Test items were developed based on two sources of authentic language use (i.e., field notes and a corpus of natural language use). Following a series of pilot studies, 107 examinees of L2 Chinese completed the test (k = 39) in the main study. Among them, nine examinees completed retrospective interviews that probed the knowledge, strategies, and processes involved in completing the test. Results: The assumption underlying the Evaluation inference was supported by satisfactory statistical characteristics of the test (e.g., item/test difficulty, item discrimination, distractor functioning, and item/person fit); moreover, the two assumptions associated with the Explanation inference were backed by quantitative and qualitative evidence demonstrating that variations in test performance were attributable to the targeted construct of pragmatic comprehension ability. Conclusion: The test appears to be a reliable instrument for assessing pragmatic comprehension ability in L2 Chinese. The test results can be used to inform decision-making on curriculum development for interested Chinese programs.
This study compared holistic and analytic marking methods for their effects on parameter estimation (of examinees, raters, and items) and on rater cognition in assessing speech act production in L2 Chinese. Seventy American learners of Chinese completed an oral Discourse Completion Test assessing requests and refusals. Four native Chinese raters evaluated the examinees’ oral productions using two four-point rating scales. The holistic scale simultaneously covered the following five dimensions: communicative function, prosody, fluency, appropriateness, and grammaticality; the analytic scale included a sub-scale for each of the five dimensions. The raters scored the dataset twice, once with each marking method, in counterbalanced order. They also verbalized their scoring rationale after performing each rating. Results revealed that both marking methods led to high reliability and produced highly correlated scores; however, analytic marking showed better assessment quality in terms of higher reliability and measurement precision, higher percentages of Rasch model fit for examinees and items, and more balanced reference to the rating criteria among raters during the scoring process.