2016
DOI: 10.1080/01443410.2015.1136407
Automated writing evaluation for formative assessment of second language writing: investigating the accuracy and usefulness of feedback as part of argument-based validation

Abstract: An increasing number of studies on the use of tools for automated writing evaluation (AWE) in writing classrooms suggest growing interest in their potential for formative assessment. As with all assessments, these applications should be validated in terms of their intended interpretations and uses. A recent argument-based validation framework outlined inferences that require backing to support integration of one AWE tool, Criterion, into a college-level English as a Second Language (ESL) writing course. The pr…

Cited by 100 publications (60 citation statements)
References 37 publications (49 reference statements)
“…First developed in the 1960s, automated systems for assessing student writing have primarily been used to score student work (Link, Dursun, Karakaya, & Hegelheimer, 2014). The last decade has seen the emergence of automated writing evaluation (AWE) tools which not only assess writing but also provide students with formative feedback on language components such as grammar and structure (Chapelle, Cotos, & Lee, 2015; Link et al., 2014; Ranalli, Link, & Chukharev-Hudilainen, 2017). Feedback generated by AWE systems is instant and specific to individual student submissions, and generally focuses on diagnosing sentence-level errors in language mechanics.…”
Section: Student Response Systems (SRS)
Mentioning confidence: 99%
“…However, AWE tools aimed at providing feedback on discourse characteristics, such as components of an introduction, have also been developed (Chapelle et al., 2015). Recent research relating to AWE tools has largely emerged from language disciplines, particularly English as a second or foreign language, and indeed marketing of AWE tools has increasingly targeted language disciplines (Bai & Hu, 2017; Ranalli et al., 2017). It is suggested that AWE tools can support educators by providing feedback on sentence mechanics, enabling educators to address higher-level writing components such as content and audience awareness (Ranalli et al., 2017).…”
Section: Student Response Systems (SRS)
Mentioning confidence: 99%
“…While there is evidence suggesting that existing tools can be helpful [12][13][14][15][16][17][18], this paper describes a demo that takes a left turn. Specifically, this paper presents a novel idea for writing feedback with revision activities.…”
Section: Related Work
Mentioning confidence: 99%
“…A follow-on investigation to this study, conducted by myself and two colleagues (Ranalli, Link, and Chukharev-Hudilainen 2016), shows how explicit arguments can serve the purpose of decision-making and help in communicating among stakeholders. I undertook this study in my dual capacity as both a researcher and a coordinator for the ESL writing program in which the Chapelle, Cotos, and Lee (2015) study was conducted.…”
Section: Explicit Arguments in CALL-Related Validation Studies
Mentioning confidence: 83%