This study investigates the consistency between human raters and an automated essay scoring system in grading high school students' English compositions. A total of 923 essays from 23 classes in 12 senior high schools in Taiwan (Republic of China) were collected and scored both manually and electronically. The results show that the consistency between human raters is significantly higher than the consistency between human raters and the automated essay scoring system. To determine how confident students were in the automated essay grading system, a questionnaire was also distributed. The results indicate that the participants recognized their vocabulary level was inadequate, and that they wanted to know the scores the automated essay grading system gave them, along with its generated comments on their compositions.
Current research on second language (L2) anxiety deals solely with vague fears. Such results neither reflect L2 learners' real concerns nor help them reduce "tension" as distinct from anxiety. The researcher therefore argues for distinguishing L2 writing tension from L2 writing anxiety. Furthermore, this study incorporates a pragmatic dimension by adding two categories of questions related to actual situations and classroom activities to the Foreign Language Writing Anxiety Questionnaire (Tsai, 2012). The results of bivariate correlation tests show that both the inter-category and intra-category correlations reach significance at the .05 level or better. Thus, the New Foreign Language Writing Anxiety Questionnaire (NFLWAQ, Appendix 1) was formed. Notably, L2 writing tension in this study is significantly higher than foreign language writing anxiety in the overall group as well as in every individual group, at the .05 significance level or better. The results indicate that the participants worry about real situations and classroom activities more than about vague, unfounded fears. Peer review was identified as the least pressure-inducing source, so L2 writing teachers may want to practice it from time to time to reduce students' tension.
Who is the most preferred reviewer, and who is deemed the most helpful in improving student writing? This study exercised a blended teaching method consisting of three currently prevailing reviewers: the automated grading system (AGS, a web-based method), peer review (a process-oriented approach), and the teacher grading technique (a product-oriented approach) in a Writing (IV) class involving 22 technological sophomore students of the Modern Languages Department. The questionnaire results indicated that the participants preferred the teacher as reviewer over their peers, followed by the automated grading system, and considered the teacher the most effective in helping their writing. Three L2 teachers, including one native speaker of English, reviewed an essay that was the single most inconsistent case between a human rater and a machine rater in the study (2.3 vs. 3.6). This case surfaced an essential problem: the automated grading system could not detect and correct expressions transferred from the L1. The data also revealed that, for teachers without training, grammatical error identification rates were 82.9%, 31.4%, and 74.3%, respectively. After training, student reviewers could detect and correct 70.2 to 79.3 percent of grammar errors on average.