2020
DOI: 10.35542/osf.io/jzqs8
Preprint

Bayesian Psychometrics for Diagnostic Assessments: A Proof of Concept

Abstract: Diagnostic assessments measure the knowledge, skills, and understandings of students at a smaller and more actionable grain size than traditional scale-score assessments. Results of diagnostic assessments are reported as a mastery profile, indicating which knowledge, skills, and understandings the student has mastered and which ones may need more instruction. These mastery decisions are based on probabilities of mastery derived from diagnostic classification models (DCMs). This report outlines a Bayesian framework…
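The abstract describes mastery decisions based on posterior probabilities of mastery produced by a DCM. Below is a minimal Python sketch of that reporting step only; the skill names, probability values, and the 0.8 decision threshold are hypothetical placeholders for illustration, not values or code from the preprint.

```python
# Illustrative sketch (not the preprint's code): turning posterior probabilities
# of attribute mastery from a DCM into a reported mastery profile.
# Skill names, probabilities, and the 0.8 threshold are assumptions.

posterior_mastery = {          # hypothetical posterior P(mastery) for one student
    "multiplication": 0.93,
    "division": 0.71,
    "fractions": 0.35,
}

THRESHOLD = 0.8  # assumed reporting rule; an operational program sets this deliberately

profile = {
    skill: ("mastered" if p >= THRESHOLD else "needs more instruction")
    for skill, p in posterior_mastery.items()
}

for skill, status in profile.items():
    print(f"{skill}: {status} (P = {posterior_mastery[skill]:.2f})")
```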

Cited by 7 publications (5 citation statements) | References 37 publications
“…For example, many best practices for educational assessments (e.g., AERA et al., 2014) have an implicit assumption that assessments result in a scale score. Therefore, many of the standards recommend evidence that either is inappropriate or requires adaptations for diagnostic assessments (Thompson et al., 2021). Perhaps unsurprisingly, this implicit assumption has also crept into federal regulations for accountability assessments, for example, in comparability requirements for states attempting to implement innovative assessments (Marion, 2023; Rupp, 2023).…”
Section: Discussion
confidence: 99%
“…Absolute fit was assessed through the χ²_obs statistic, calculated from the posterior predictive model checks (see Thompson, 2019). For this statistic, a ppp value of less than 0.05 generally indicates insufficient fit of the model to the observed data.…”
Section: Methods 1: Patterns of Mastery Profiles
confidence: 99%
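The cited passage describes assessing absolute fit with a χ²_obs discrepancy computed from posterior predictive model checks, flagging misfit when the ppp value falls below 0.05. The Python sketch below shows, in generic terms, how such a posterior predictive p-value can be computed; the simulated items, examinee count, and Pearson-style discrepancy function are illustrative assumptions, not the actual procedure from Thompson (2019) or the preprint.

```python
# Generic sketch of a posterior predictive p-value (ppp) for a chi-square-type
# discrepancy; an illustration of PPMC, not the cited implementation.
import numpy as np

rng = np.random.default_rng(42)

def chi_sq_discrepancy(counts, expected):
    """Pearson-style discrepancy between counts and model-implied expectations."""
    expected = np.clip(expected, 1e-9, None)
    return np.sum((counts - expected) ** 2 / expected)

# Hypothetical setup: 10 items, 200 examinees, 1,000 posterior draws of the
# model-implied probability of a correct response on each item.
n_items, n_examinees, n_draws = 10, 200, 1000
true_p = rng.uniform(0.3, 0.9, size=n_items)
observed_counts = rng.binomial(n_examinees, true_p)  # observed correct counts per item
posterior_p = np.clip(true_p + rng.normal(0, 0.03, size=(n_draws, n_items)), 0, 1)

exceed = 0
for p in posterior_p:
    expected = n_examinees * p
    # replicate data from this posterior draw, then compare discrepancies
    replicated_counts = rng.binomial(n_examinees, p)
    d_rep = chi_sq_discrepancy(replicated_counts, expected)
    d_obs = chi_sq_discrepancy(observed_counts, expected)
    exceed += d_rep >= d_obs

ppp = exceed / n_draws
print(f"ppp = {ppp:.3f}  (values below 0.05 suggest inadequate absolute fit)")
```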