2017
DOI: 10.1007/s40593-017-0143-2

Assessing Students’ Use of Evidence and Organization in Response-to-Text Writing: Using Natural Language Processing for Rubric-Based Automated Scoring

Citation Types: 0 supporting, 33 mentioning, 0 contrasting

Year Published: 2017–2024

Cited by 41 publications (35 citation statements)
References 25 publications

“…Text mining has been largely used to evaluate different aspects of essays automatically (Dikli, 2006; Rahimi, Litman, Correnti, Wang, & Matsumura, 2017). Some researchers analyzed shallow features and error detection (Burstein, 2003).…”
Section: Essays
confidence: 99%
“…the analysis of different algorithms for the identification of prejudiced phrases in essays (Rahimi et al., 2017). Essay NLP Evaluation: This paper presents an investigation of score prediction based on natural language processing for two targeted constructs within analytic text-based writing (Balyan et al., 2017).…”
confidence: 99%
“…We have developed several AES systems for RTA assessment (Rahimi et al. 2017; Zhang and Litman 2017). Our first model (denoted by Rubric) (Rahimi et al. 2017) used NLP to represent an essay in terms of features that largely correspond to cells in the RTA Evidence rubric. This rubric, as well as the correspondence between the rubric and features that serve as input to the scoring model, are shown in Table 2.…”
Section: AES Feature Extraction
confidence: 99%
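
To make the rubric-to-feature correspondence concrete, the following is a minimal sketch of word-overlap evidence features in the spirit of the Rubric model. The topic word lists, feature names, and the concentration threshold are illustrative assumptions, not the published feature set (see Rahimi et al. 2017 for the actual features).

from typing import Dict, List

# Hypothetical topical components: each topic maps to words that count as
# evidence for it in the source article. These lists are invented for
# illustration; the real system derives them from the article and rubric.
TOPIC_WORDS: Dict[str, List[str]] = {
    "malaria": ["malaria", "bed nets", "mosquito"],
    "farming": ["fertilizer", "seeds", "crops"],
    "education": ["school", "supplies"],
}

def evidence_features(essay: str) -> Dict[str, float]:
    """Rubric-style features from simple word overlap:
    npe     -- number of topics with at least one evidence word (coverage)
    spc_<t> -- distinct evidence words mentioned for topic t (specificity)
    con     -- 1.0 if all mentions fall in a short character span, suggesting
               evidence is listed rather than elaborated (concentration)
    """
    text = essay.lower()
    feats: Dict[str, float] = {}
    positions: List[int] = []
    topics_hit = 0
    for topic, words in TOPIC_WORDS.items():
        hits = [w for w in words if w in text]
        feats[f"spc_{topic}"] = float(len(hits))
        if hits:
            topics_hit += 1
            positions.extend(text.index(w) for w in hits)
    feats["npe"] = float(topics_hit)
    feats["con"] = float(bool(positions) and max(positions) - min(positions) < 100)
    return feats

print(evidence_features("They used bed nets against malaria and bought seeds."))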
“…While we have developed pilot data-driven methods that can extract such topical components automatically (Rahimi and Litman 2016), our methods need to be improved so that they do not degrade SG model performance. eRevise will also be enhanced to provide feedback for Organization, a second substantive RTA writing dimension for which we already have a pilot AES (Rahimi et al. 2017). We also plan to move from feedback selection to more personalized feedback generation, and to create a teacher dashboard which can automatically generate summaries such as Figure 3a.…”
Section: Current and Future Directions
confidence: 99%
“…The RTA was designed to evaluate writing skills in Analysis, Evidence, Organization, Style, and MUGS (Mechanics, Usage, Grammar, and Spelling) dimensions. To both score the RTA and provide formative feedback to students and teachers at scale, an automated RTA scoring tool is now being developed (Rahimi et al., 2017). This paper focuses on the Evidence dimension of the RTA, which evaluates students’ ability to find and use evidence from an article to support their position.…”
Section: Introduction
confidence: 99%
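
Because the Evidence dimension is scored on a rubric scale, features like those in the sketch above can feed a standard supervised classifier. The snippet below shows one plausible setup using scikit-learn; the choice of random forest, the toy feature dictionaries, and the scores are assumptions for illustration, not the published training pipeline.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline

# Toy feature dicts of the kind produced by the extraction sketch above,
# paired with invented Evidence rubric scores (1-4 scale).
X = [
    {"npe": 1.0, "con": 1.0, "spc_malaria": 2.0},
    {"npe": 3.0, "con": 0.0, "spc_malaria": 1.0},
]
y = [1, 3]

# DictVectorizer turns feature dicts into vectors (missing keys become 0),
# so essays that mention different topics still share one feature space.
model = make_pipeline(DictVectorizer(), RandomForestClassifier(random_state=0))
model.fit(X, y)
print(model.predict([{"npe": 2.0, "con": 0.0, "spc_malaria": 1.0}]))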