Automated scoring and feedback systems: Where are we and where are we heading? (2010)
DOI: 10.1177/0265532210364643

Cited by 107 publications (74 citation statements); references 20 publications.
“…It would be more conclusive if approached through systematic evidence relevant to the interpretations, uses, and consequences of AWE, compiled under an argument-driven validity framework (Xi, 2010). For example, researchers have proposed a conceptual model for the validation of AWE use in local settings, intended as a heuristic for writing program administrators.…”
Section: Need for Empirical Evidence
confidence: 99%
“…As Galaczi (2010) reminds us, a key concept in language testing is "fitness for purpose": "Tests are not just valid, they are valid for a specific purpose, and as such, different test formats have different applicability for different contexts, age groups, proficiency levels, and score-user requirements" (p. 26). In the same vein, numerous researchers highlight the importance of establishing construct validity with reference to the inferences and decisions made on the basis of test scores (Bernstein et al., 2010; Galaczi, 2010; Xi, 2010). As Bernstein et al. (2010) point out, computer test scores provide one piece of evidence about a candidate's performance but should not necessarily be used as the sole basis for decision-making.…”
Section: Results
confidence: 99%
“…Unlike automated writing assessment, ASE involves an additional layer of complexity in that the test takers' oral output must first be recognized before it can be evaluated (Xi, 2010a). Despite ongoing research and recent advancements in automated speech recognition (ASR), these technologies are not robust at recognizing non-native accented speech because most ASR-based systems have been designed for a narrow range of native speech patterns.…”
Section: Research on Automated Assessment
confidence: 99%
“…Other ASE systems (e.g., SpeechRater, developed by ETS) compensate for this limitation with free speech recognition by expanding the speaking construct to include pronunciation, vocabulary, and grammar, in addition to fluency (Xi, Higgins, Zechner, & Williamson, 2008). According to Xi (2010a), currently neither of these approaches "has successfully tackled the problem of under- or misrepresentation of the construct of speaking proficiency in either the test tasks used or the automated scoring methodologies, or both" (p. 294).…”
Section: Research on Automated Assessment
confidence: 99%