2021
DOI: 10.31219/osf.io/gcetv
Preprint

Validity of Automated Text Evaluation Tools for Written-Expression Curriculum-Based Measurement: A Comparison Study

Abstract: Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated essay scoring as well as written expression curriculum-based measurement (WE-CBM) to determine whether an automated approach improves technical adequacy. A sample of 140 fourth grade students generated writing samples that were then scored using traditional and …

Cited by 1 publication (1 citation statement)
References 27 publications
“…In WE-CBM, scoring primarily considers text production and accuracy, whereas in aLPA, we use a wide range of word-, sentence-, and discourse-level indices provided by automated text evaluation software to generate overall writing quality scores. Second, based on research demonstrating that automated text evaluation can generate writing quality scores that are useful for screening (Keller-Margulis, Mercer, & Matta, 2021; Mercer, Keller-Margulis, Faith, Reid, & Ochs, 2019; Wilson, 2018), we also anticipate that computer-based assessment will be necessary for writing samples to be scored and for such a system to be feasibly used by teachers. Third, given that we know that multiple, longer-duration writing samples will be necessary for reliability (Keller-Margulis et al., 2016), we anticipate that a reduced test frequency will be optimal, compared to typical CBM progress monitoring procedures of weekly assessments.…”
Section: Automated Learning Progress Assessment in Written Expression
confidence: 99%
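The quoted passage contrasts WE-CBM scoring, which emphasizes text production and accuracy, with automated scoring that combines word-, sentence-, and discourse-level indices into an overall quality score. The sketch below illustrates that contrast in minimal form, assuming a simple word count as the WE-CBM proxy and an arbitrary weighted sum over hypothetical index names; neither the index names, the weights, nor the formula correspond to the instruments or software used in the cited studies.

```python
# Illustrative contrast between a production-oriented WE-CBM metric and a
# composite quality score built from multi-level text indices.
# All index names, values, and weights below are assumptions for illustration,
# not the scoring models used by the cited studies or any specific tool.

def total_words_written(sample: str) -> int:
    """WE-CBM-style production metric: number of words in the sample."""
    return len(sample.split())

def composite_quality_score(indices: dict[str, float],
                            weights: dict[str, float]) -> float:
    """Weighted combination of word-, sentence-, and discourse-level indices
    into a single writing-quality score (hypothetical weighting scheme)."""
    return sum(weights[name] * value for name, value in indices.items())

if __name__ == "__main__":
    sample = "The dog ran fast. It chased the ball across the yard."
    print("Total words written:", total_words_written(sample))

    # Hypothetical standardized indices an automated text evaluation tool
    # might report for the same sample (values made up for illustration).
    indices = {"word_frequency": 0.4, "sentence_length": -0.2, "cohesion": 0.6}
    weights = {"word_frequency": 0.3, "sentence_length": 0.3, "cohesion": 0.4}
    print("Composite quality score:", composite_quality_score(indices, weights))
```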