Why and when do people disagree in their conceptions or prototypes of social categories? Six studies revealed that such differences tend to be self-serving. Subjects tended to endorse self-descriptive attributes as central to their prototypes of desirable social concepts and to emphasize non-self-descriptive features in their conceptions of undesirable categories. These disagreements were constrained to attributes potentially central to the domain in question and did not occur for clearly peripheral features. Self-serving differences in prototype structure were exhibited in social information processing tasks and led to disagreements in judgments of others. Potential mechanisms underlying the development of these egocentric cognitive structures and their implications for self-serving judgments of ability are discussed.
Local assessment systems are being marketed as formative, benchmark, predictive, and a host of other terms. Many so-called formative assessments bear little resemblance to the types of assessments and strategies studied by Black and Wiliam (1998); they are instead interim assessments. In this article, we clarify the definition and uses of interim assessments and argue that they can be an important piece of a comprehensive assessment system that includes formative, interim, and summative assessments. Interim assessments are given on a larger scale than formative assessments, have less flexibility, and are aggregated to the school or district level to help inform policy. Interim assessments are driven by their purposes, which fall into the categories of instructional, evaluative, and predictive. Our intent is to provide a specific definition for these "interim assessments" and to develop a framework that district and state leaders can use to evaluate such systems for purchase or development. The discussion lays out some concerns with the current state of these assessments as well as hopes for future directions and suggestions for further research.
There has been much discussion recently about why the percentage of students scoring Proficient or above varies as much as it does on state assessments across the country. Most of these discussions center on the leniency or rigor of the cut score. Yet the cut score is developed in a standard-setting process that depends heavily on the definition of each level of performance. Good performance-level descriptors (PLDs) can be the foundation of an assessment program, driving everything from item development to cut scores to reporting. PLDs should be written using a multistep process. First, policymakers determine the number and names of the levels. Next, they develop policy definitions specifying the level of rigor intended by each level, regardless of the grade or subject to which it is applied. Finally, content experts and education leaders should supplement these policy definitions with specific statements related to the content standards for each assessment. This article describes a process for developing PLDs, contrasts that process with current state practice, and discusses the implications for interpreting the word "proficient," which is the keystone of No Child Left Behind.