Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications 2019
DOI: 10.18653/v1/w19-4452
Rubric Reliability and Annotation of Content and Argument in Source-Based Argument Essays

Abstract: We present a unique dataset of student source-based argument essays to facilitate research on the relations between content, argumentation skills, and assessment. Two classroom writing assignments were given to college students in a STEM major, accompanied by a carefully designed rubric. The paper presents a reliability study of the rubric, showing it to be highly reliable, and initial annotation of content and argumentation in the essays.

Cited by 6 publications (4 citation statements) | References 31 publications
“…It could therefore be used by instructors to provide feedback on students' understanding of sources. The same features that have proved useful for automated analysis of argument in previous work [76] are shown to be the most predictive of the feature sets used here as well. In the context of freshman writing courses, especially for STEM students, work on integrating automated assessment of argumentation and subject-matter content is already in progress.…”
Section: Discussion (supporting)
confidence: 53%
“…The results of the automated analysis of the student summaries show that the automated summary analysis performs well and could, therefore, be used by instructors to provide feedback on students' understanding of sources. The same features that have proved useful for automated analysis of argument in previous work [73] are shown to be the most predictive of the feature sets used here, in the context of undergraduate writing courses designed for STEM students. The work on integrating automated assessment of argumentation and subject matter content is already in progress.…”
Section: Discussion (mentioning)
confidence: 58%
“…In content-driven programs where subject specialists teach their subject matter, students’ compositions are predominantly evaluated on their content. Rubrics employed in these courses commonly entail multiple specific content-related criteria, without dividing linguistic features into minute elements as frequently seen in L2 writing assessment and language-driven CLIL (e.g., Bean & Melzer, 2021; Bukhari et al, 2021; Gao et al, 2019; Garza et al, 2021; Walvoord & Anderson, 2010). For example, the following nine criteria exist in a rubric used at two U.S. universities to assess first-year students’ compositions, including engagement with external sources, originality of the contention, clarity of the contention, effectiveness of supporting evidence, presentation of the writer’s own idea, organization, source use, language style, use of standard written English, and formatting (Walvoord & Anderson, 2010).…”
Section: Literature Review (mentioning)
confidence: 99%