Although fluency is an important subconstruct of language proficiency, it has not received as much attention in L2 writing research as complexity and accuracy have, in part due to the lack of methodological approaches for analyzing large datasets of writing-process data. This article presents a method of time-aligned keystroke logging and eye-tracking and reports an empirical study investigating L2 writing fluency through this method. Twenty-four undergraduate students at a private university in Turkey performed two writing tasks delivered through a web text editor with embedded keystroke logging and eye-tracking capabilities. Linear mixed-effects models were fit to predict indices of pausing and reading behaviors based on language status (L1 vs. L2) and linguistic context factors. Findings revealed differences between pausing and eye-fixation behavior in L1 and L2 writing processes. The article concludes by discussing the affordances of the proposed method from theoretical and practical standpoints.
Even though current technologies allow for automated feedback, evaluating content and generating discourse-specific feedback remain a challenge for automated systems, which helps explain the scarcity of research investigating the effects of such feedback. This study explores the impact of automated formative feedback on the improvement of English as a second language (ESL) learners' written causal explanations within two cause-and-effect essays and across pre- and post-tests. Pre- and post-test drafts, feedback reports for first and revised drafts from the automated writing evaluation system, and screen-capture videos collected from 31 students enrolled in two sections of an advanced-low-level academic writing class were analyzed through descriptive statistics and the Wilcoxon signed-rank test. Findings revealed statistically significant changes in learners' causal explanations within one cause-and-effect essay, while no significant improvement was observed across pre- and post-tests. The findings of this study offer not only insights into how to further improve automated discourse-specific feedback but also pedagogical implications for better learning outcomes.
Automated writing evaluation (AWE) technologies are common supplementary tools for helping students improve their language accuracy using automated feedback. In most existing studies, AWE has been implemented as a class activity or an assignment requirement in English or academic writing classes; its potential as a voluntary language learning tool remains largely unexplored. This study reports on the voluntary use of Criterion by English as a foreign language (EFL) students in two content courses for two assignments. We investigated (a) to what extent students used Criterion and (b) to what extent their revisions based on automated feedback increased the accuracy of their writing from the first submitted draft to the last in both assignments. We analyzed students' performance summary reports from Criterion using descriptive statistics and non-parametric statistical tests. The findings showed that not all students used Criterion or resubmitted a revised draft. However, the findings also showed that engagement with automated feedback significantly reduced users' errors from the first draft to the last in 11 error categories in total across the two assignments.