The Companion to Language Assessment 2013
DOI: 10.1002/9781118411360.wbcla124

Computer‐Automated Scoring of Written Responses

Abstract: This chapter begins by briefly discussing the human scoring procedures that preceded—and still operate parallel to—computer‐automated scoring (CAS) of written responses. The current conceptualization of the topic is approached by tracing the development of CAS in two areas: extended response tasks such as essays, and limited production tasks such as short answer questions. Limited production responses will be further divided based on the approach to scoring that is being used. This classification is important …

Cited by 3 publications (2 citation statements)
References 15 publications

Citation statements:
“…They can only be a kind of extra assistance to the instructors, but not a substitute. So, assessment and grading should be done by instructors rather than by software, because writing is a human act and there is no correlation between how humans react and how software reacts (Carr, 2014; Koskey & Shermis, 2013; Landauer, 2003). Moreover, other objections to OAF are that the feedback itself is vague and sometimes incomprehensible to the students (Hegelheimer, 2015; Lai, 2010; Tsuda, 2014).…”
Section: Previous Studies on OAF (mentioning)
confidence: 99%
“…Thirdly, more independent, transparent, and comparative research on the quality of automated evaluation engines is needed to assure test takers that they are assessed fairly. All the Chinese empirical studies to date have reported how well their automated evaluation systems predicted human scores; however, as Carr (2014) rightly pointed out, research on automated evaluation systems "has been conducted by the companies developing the systems, and…there is a marked lack of independent research comparing different systems head to head." This limitation also applies to the existing Chinese studies.…”
Section: The Future (mentioning)
confidence: 99%