2022
DOI: 10.1016/j.asw.2022.100667
The persuasive essays for rating, selecting, and understanding argumentative and discourse elements (PERSUADE) corpus 1.0

Cited by 12 publications (10 citation statements)
References 15 publications
“…The PERSUADE 2.0 corpus originally consists of over 25,000 argumentative essays written by students from 6th to 12th grade in the US in response to 15 different prompts (Crossley et al., 2022). The holistic essay scores, which serve as the ground truth for this study's predictions, were assigned by human raters trained on a scoring rubric employed in the standardized Scholastic Aptitude Test (SAT) in the US.…”
Section: Methods Datasets
confidence: 99%
“…All these demographic attributes were considered to address RQ2. The details of the dataset are described in Crossley et al. (2023).…”
Section: Methods Datasets
confidence: 99%
“…Technology can support feedback generation using automated writing evaluation (AWE; Ngo et al., 2022). Current AWE systems are based on corpora of essays for a single writing task, hand-scored by raters for specific elements related to writing, for example, holistic scores of writing quality (Shermis, 2014) or argumentative elements (Crossley, Baffour, Tian, Picou, Benner & Boser, 2022). These "training data", consisting of texts and human ratings, are then analyzed with machine learning algorithms, which recognize patterns in the training data and evaluate new texts on the same task based on those patterns (see Ercikan & McCaffrey, 2022, for a description of the strengths and weaknesses of the process).…”
Section: Generating Feedback With AI
confidence: 99%
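The AWE workflow described in this citation statement — texts paired with human ratings, a model fit to the patterns, new texts scored on the same task — can be sketched as follows. This is a minimal illustration, not any specific AWE system: the toy essays, scores, and the TF-IDF + ridge regression pipeline are assumptions for demonstration only.

```python
# Minimal sketch of AWE-style score modeling: essays paired with human
# holistic scores are vectorized, a regression model learns the pattern,
# and a new essay on the same task is scored. All data here is toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# "Training data": texts plus human ratings (hypothetical examples).
essays = [
    "Driverless cars should be allowed because they reduce human error.",
    "Cars that drive themselves are bad.",
    "Autonomous vehicles improve safety, but regulation must keep pace.",
    "I like cars.",
]
human_scores = [5.0, 2.0, 6.0, 1.0]  # holistic quality ratings from raters

# Fit a simple scoring model: TF-IDF features + ridge regression.
scorer = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
scorer.fit(essays, human_scores)

# Evaluate a new essay on the same task based on the learned patterns.
new_essay = ["Self-driving cars reduce error and improve safety."]
predicted = scorer.predict(new_essay)[0]
print(round(predicted, 2))
```

Production systems typically use far richer linguistic features (or fine-tuned language models) and thousands of rated essays, but the train-on-human-ratings, score-new-texts loop is the same.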
“…Although work on medical text has continued, relatively little attention has been devoted to automatic deidentification of educational data. Currently, most educational researchers use human annotators to label PII before releasing educational data sets (Crossley et al., 2022, 2023; Megyesi et al., 2018). However, the success of deidentification systems in the medical domain suggests that it may be possible to develop automated text deidentification systems for other contexts as well.…”
Section: Approaches To Text Deidentification
confidence: 99%
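The automated text deidentification idea raised in this citation statement can be sketched minimally with pattern-based redaction. Real deidentification systems rely on trained named-entity recognition models; the two regex patterns below (email, US-style phone number) and the placeholder labels are illustrative assumptions only.

```python
# Minimal, regex-based sketch of automated text deidentification:
# matched PII spans are replaced with bracketed placeholder labels.
# The patterns are illustrative; production systems use NER models.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace each matched PII span with its placeholder label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or 555-123-4567."
print(deidentify(sample))
# → Contact Jane at [EMAIL] or [PHONE].
```

Note that the person name survives this sketch, which is exactly why student essays — full of names, schools, and locations — require model-based approaches rather than fixed patterns.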