The purpose of this study was to examine the effects of presenting a choice of writing tasks on the quality of essays produced by 11th-grade students. The effects of task choice were also examined for interactions with student gender and race. Fifteen writing tasks, designed to elicit persuasive essays, were administered to 34,200 students in Georgia. Approximately half the students received an assigned task, whereas the other half were presented with a choice of two tasks. A multivariate analysis of variance (MANOVA) was conducted using four domain scores assessing writing quality as the dependent variables and four independent variables (gender, race, writing task, and choice condition). Gender, race, and writing task each had a significant effect in the MANOVA and in all four univariate analyses. Female students wrote essays of higher quality than male students, and White students wrote essays of higher quality than Black students. The choice condition had no substantive effect on essay quality. The writing task variable interacted significantly with the other independent variables.
The assessment of students' writing skills through essays is a common practice in educational institutions. Scoring essays requires considerable judgment on the part of raters, and when raters assign different scores to the same essay, testing practitioners must resolve the discrepancy before computing an operational score to report to the examinee. This study investigated five forms of score resolution reported in a national survey of state department of education testing agencies, examining the effect of each form of resolution on the reliability of the resulting operational scores. It is shown that some methods of resolution are associated with higher interrater reliability than others. It is also shown that the choice of resolution method can affect both the magnitude of the reported scores and the final passing rate of an assessment.