Over the past decade or so, a growing number of writers have argued that cognitive science and psychometrics could be combined in the service of instruction. Researchers have progressed beyond statements of intent to the hands-on business of researching and developing diagnostic assessments that combine cognitive science and psychometrics, which I call cognitively diagnostic assessment (CDA). In this article, I attempt to organize the many loosely connected efforts to develop cognitively diagnostic assessments. I consider the development of assessments to guide specific instructional decisions, sometimes referred to as diagnostic assessments. Many of my arguments apply to program evaluation as well: assessments that reveal the mechanisms test takers use in responding to items or tasks provide important information on whether instruction is achieving its goals. My goal in this article is to characterize CDA in terms of the intended use of assessment and the methods of developing and evaluating assessments. Toward this goal, I (a) outline the societal trends that motivate the development of CDA, (b) introduce a framework within which the psychological and statistical aspects of CDA can be coordinated, and (c) summarize efforts to develop CDA in a five-step methodology that can guide future development efforts. Finally, I address some of the issues developers of CDA must resolve if CDA is to succeed.
Research-based learning in a teaching environment is an effective way to bring the excitement and experience of independent bench research to a large number of students. The program described here is the second of a two-semester biochemistry laboratory series in which students are empowered to design, execute, and analyze their own experiments for the entire semester. This style of laboratory replaces a variety of shorter labs in favor of an in-depth, research-based learning experience. The concept is to allow students to function in independent research groups. The research projects focus on a series of wild-type and mutant clones of malate dehydrogenase. A common research theme for the laboratory helps instructors administer the course and is key to delivering a research opportunity to a large number of students. This research-based learning laboratory produces students who are much more confident and skilled in critical areas of biochemistry and molecular biology. Students with research experience have significantly higher confidence and motivation than students without previous research experience. We have also found that all students performed better in advanced courses and in the workplace.
Assessments labeled as formative have been offered as a means to improve student achievement. But labels can be a powerful way to miscommunicate. For an assessment use to be appropriately labeled “formative,” both empirical evidence and reasoned arguments must be offered to support the claim that improvements in student achievement can be linked to the use of assessment information. Our goal in this article is to support the construction of such an argument by offering a framework within which to consider evidence‐based claims that assessment information can be used to improve student achievement. We describe this framework and then illustrate its use with an example of one‐on‐one tutoring. Finally, we explore the framework's implications for understanding when the use of assessment information is likely to improve student achievement and for advising test developers on how to develop assessments that are intended to offer information that can be used to improve student achievement.
In 2018, 26 states administered a college admissions test to all public school juniors. Nearly half of those states proposed to use those scores as their academic achievement indicators for federal accountability under the Every Student Succeeds Act (ESSA); many others are planning to use those scores for other accountability purposes. Accountability encompasses a number of different uses and subsumes a variety of claims. For states proposing to use summative tests for accountability, a validity argument needs to be developed, which entails delineating each specific use of test scores associated with accountability, identifying appropriate evidence, and offering a rebuttal to counterclaims. The aim of this article is to support states in developing a validity argument for the use of college admission test scores for accountability by identifying claims that are applicable across states and summarizing existing evidence as it relates to each of these claims. As outlined by the Standards for Educational and Psychological Testing, multiple sources of evidence are used to address each claim. Threats to the validity argument, including weaker alignment with content standards and the potential narrowing of teaching, are reviewed. Finally, the article contrasts validity evidence, primarily from research on the ACT, with regulatory requirements from ESSA. The Standards and guidance addressing the use of a "nationally recognized high school academic assessment" (Elementary and Secondary Education Act (ESEA), Negotiated Rulemaking Committee; Department of Education) are the primary sources for the organization of validity evidence.
Do current test development practices align well with the cognitively complex constructs being called for in the educational reform movement? What types of test development practices are needed to develop measures of cognitively complex constructs?