2012
DOI: 10.1002/tea.21030
Developing and evaluating instructionally sensitive assessments in science

Abstract: The purpose of this article is to address a major gap in the instructional sensitivity literature on how to develop instructionally sensitive assessments. We propose an approach to developing and evaluating instructionally sensitive assessments in science and test this approach with one elementary life-science module. The assessment we developed was administered to 125 students in seven classrooms. The development approach considered three dimensions of instructional sensitivity; that is, assessment items shoul…



Cited by 37 publications (35 citation statements) | References 29 publications
“…Given that learning measures need to be sensitive enough to gauge differences between case-based and traditional lecture instruction, it is important that assessments “fit the philosophy of active learning rather than passive reproductive learning” (Reynolds, p. 22). Ruiz-Primo et al. also argued for the need to develop instructionally sensitive assessments that reflect not only the content being taught but also the quality of instruction. Previous research has suggested that open-ended assessments measuring conceptual understanding are more likely to show the influence of student-centered instruction than fact-based assessments (Gallagher).…”
Section: Case Studies in Engineering
confidence: 99%
“…Haladyna and Roid, as well as Polikoff, emphasize the use of the PPDI because it is technically easy to implement and conceptually easy to understand. In an experimental study, Ruiz-Primo and colleagues found items' PPDI to be proportional to the alignment between item characteristics and the implemented curriculum. Also, item selection based on the PPDI does not negatively impact reliability (Crehan).…”
Section: The Issue of Instructional Sensitivity
confidence: 99%
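The PPDI (pre-to-post difference index) discussed in the statement above is simply the change in the proportion of students answering an item correctly from before to after instruction. A minimal sketch of the computation, assuming dichotomous (0/1) item scores for the same group of students; the function name and example data are illustrative, not from the cited studies:

```python
def ppdi(pre_scores, post_scores):
    """Pre-to-post difference index for a single item.

    pre_scores / post_scores: lists of 0/1 scores on the item for the
    same group of students before and after instruction.
    Returns p_post - p_pre. Values near 0 suggest the item is
    insensitive to instruction; larger positive values suggest
    greater instructional sensitivity.
    """
    p_pre = sum(pre_scores) / len(pre_scores)
    p_post = sum(post_scores) / len(post_scores)
    return p_post - p_pre

# Hypothetical item: 2 of 10 students correct before instruction,
# 8 of 10 correct after.
print(round(ppdi([1, 1] + [0] * 8, [1] * 8 + [0] * 2), 2))  # → 0.6
```

An item-selection rule based on this index would flag items whose PPDI falls below some threshold as candidates for revision, which is the use Haladyna and Roid and Polikoff describe.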
“…Oftentimes, it is the researchers themselves who would know best how to make instruments sufficiently sensitive to the treatment effect. One approach to reconciling this conflict is to employ a series of assessments or subscales that range in their distance from students' experiences with the intervention (Ruiz-Primo, Shavelson, Hamilton, & Klein; Ruiz-Primo et al.). That is, the assessments include a set of “close” items that are very closely aligned with students' experiences with the intervention; “proximal” items that are aligned with the learning objectives of the intervention; and “distal” items that are aligned with national or state standards.…”
Section: Requirements of Studies of Causal Effects and Associated Imp…
confidence: 99%