2005
DOI: 10.1207/s15434311laq0202_2
Resolving Score Differences in the Rating of Writing Samples: Does Discussion Improve the Accuracy of Scores?

Cited by 50 publications (32 citation statements). References 27 publications.
“…We compared the nature of the novice discussion to that of the experts with particular regard to the themes that emerged. To increase reliability, two different researchers coded the transcripts and met to discuss their codes until 100% agreement was attained (following that described in Johnson, Penny, Gordon, Shumate, & Fisher, 2005). These researchers followed a similar protocol to interpret the results.…”
Section: Methods (mentioning)
Confidence: 99%
“…Apply a two-step scoring process in which the rater matches the composition to the closest benchmark and then, if it does not match that benchmark perfectly, scores it again by adding a plus or minus to the first score (Penny, Johnson, & Gordon, 2000a, 2000b; Johnson, Penny, Fisher, & Kuhs, 2003). Have raters discuss and resolve differences in their scores (Johnson, Penny, Gordon, Shumate, & Fisher, 2005). Combine the scores of two disagreeing raters with a third score provided by a more experienced rater (Johnson, Penny, & Gordon, 2000).…”
Section: Make High-Stakes Writing Assessments Fair (mentioning)
Confidence: 99%
“…One of the most frequent criticisms of studies that attempt to measure writing skills concerns the reliability of the assessments, since language is always subject to interpretation. Apart from the quality of the writing, multiple sources of error can contribute to the variability of scores, among them differences in criteria between raters, ambiguity in the scoring criteria, and variations in the conditions under which scoring is carried out [15][16][17].…”
Section: Investigación (unclassified)