This study investigates aspects of validity reflected in a large and diverse sample of published measures used in educational and psychological testing contexts. The current edition of Mental Measurements Yearbook served as the data source for this study. The validity aspects investigated included perspective on validity represented, number and kinds of sources of validity evidence provided, overall evaluation of the favorability of the test, and whether these factors varied as a function of the type of test. Findings reveal that validity information is not routinely provided in terms of modern validity theory, some sources of validity evidence (e.g., consequential) are essentially ignored in validity reports, and the favorability of judgments about a test is more strongly related to the number of validity sources provided than to the perspective on validity taken or other factors. The article concludes with implications for extending and refining current validity theory and validation practice.
This module describes some common standard‐setting procedures used to derive performance levels for achievement tests in education, licensure, and certification. Upon completing the module, readers will be able to: describe what standard setting is; understand why standard setting is necessary; recognize some of the purposes of standard setting; calculate cut scores using various methods; and identify elements to be considered when evaluating standard‐setting procedures. A self‐test and annotated bibliography are provided at the end of the module. Teaching aids to accompany the module are available through NCME.
The Common Core set a standard for all children to read increasingly complex texts throughout schooling. The purpose of the present study was to explore text characteristics specifically in relation to early-grades text complexity. Three hundred fifty primary-grades texts were selected and digitized. Twenty-two text characteristics were identified at 4 linguistic levels, and multiple computerized operationalizations were created for each of the 22 text characteristics. A researcher-devised text-complexity outcome measure was based on teacher judgment of text complexity in the 350 texts as well as on student judgment of text complexity as gauged by their responses in a maze task for a subset of the 350 texts. Analyses were conducted using a logical analytical progression typically used in machine-learning research. Random forest regression was the primary statistical modeling technique. Nine text characteristics were most important for early-grades text complexity, including word structure (decoding demand and number of syllables in words), word meaning (age of acquisition, abstractness, and word rareness), and sentence- and discourse-level characteristics (intersentential complexity, phrase diversity, text density/information load, and noncompressibility). Notably, interplay among text characteristics was important to explanation of text complexity, particularly for subsets of texts.
The Common Core raises the stature of texts to new heights, creating a hubbub. The fuss is especially messy at the early grades, where children are expected to read more complex texts than in the past. But early-grades teachers have been given little actionable guidance about text complexity. The authors recently examined early-grades texts to discover what makes them complex and now report findings that can help teachers: young children's texts are special; a handful of text characteristics can signal text-complexity level; sometimes the interplay of text characteristics modulates text-complexity level; and knowing why a text is complex can facilitate text selection.