2023
DOI: 10.1037/rev0000421

Evaluating the complexity and falsifiability of psychological models.

Abstract: Understanding model complexity is important for developing useful psychological models. One way to think about model complexity is in terms of the predictions a model makes and the ability of empirical evidence to falsify those predictions. We argue that existing measures of falsifiability have important limitations and develop a new measure. KL-delta uses Kullback–Leibler divergence to compare the prior predictive distributions of models to the data prior that formalizes knowledge about the plausibility of di…
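To make the abstract's idea concrete, here is a minimal sketch of the ingredient KL-delta builds on: the Kullback–Leibler divergence between a data prior and a model's prior predictive distribution over a discretized data space. The binomial setup, the Beta prior, and the function name kl_divergence are illustrative assumptions, not the paper's implementation, and the published measure may combine this ingredient differently.

```python
import numpy as np
from scipy import stats

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) for two discrete distributions over the same support.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical data space: number of successes in n Bernoulli trials.
n = 20
data_space = np.arange(n + 1)

# Data prior: a distribution over possible data sets expressing which
# outcomes are considered plausible a priori (illustrative choice here).
data_prior = stats.binom.pmf(data_space, n, 0.5)
data_prior /= data_prior.sum()

# Model's prior predictive: average the binomial likelihood over a
# Beta(1, 1) prior on the success rate (Monte Carlo approximation).
rng = np.random.default_rng(0)
theta = rng.beta(1.0, 1.0, size=10_000)
prior_predictive = stats.binom.pmf(data_space[:, None], n, theta).mean(axis=1)
prior_predictive /= prior_predictive.sum()

# The basic comparison underlying a KL-delta-style measure: how far the
# model's prior predictions sit from the data prior.
print(kl_divergence(data_prior, prior_predictive))
```

Discretizing the data space and approximating the prior predictive by Monte Carlo are conveniences for the sketch; a sharply predicting model concentrates its prior predictive mass, and the divergence from the data prior quantifies how falsifiable those predictions are.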

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1
1

Citation Types

0
1
0

Year Published

2023
2023
2024
2024

Publication Types

Select...
5
3

Relationship

1
7

Authors

Journals

Cited by 8 publications (4 citation statements). References 68 publications.
“…Passing this check would strengthen claims of replication success by ensuring that the data patterns do not resemble the disparate matrices at the top of Figure 1. By directly checking the data patterns, this method is conceptually aligned with specification of a data prior, which has been recently recommended as a tool for stronger theory testing (Vanpaemel, 2020; Villarreal et al., 2023). In the present context, prior specification greatly benefits from reliance on the original findings.…”
Section: Methods
confidence: 99%
“…The three examples above serve to illustrate the ubiquity of nonlinear reparameterizations possible for scientific models across psychology and related fields. Despite a popular preference to represent models using certain "canonical" parameterizations (e.g., Kruschke, 2018), ultimately the epistemic content of a model is not in its parameters but in the distributions it generates over data (Villarreal et al., 2023). There is always an alternative parameterization that could be meaningfully understood; good methods should give us the same conclusions across them all.…”
Section: The Circular Drift-diffusion Model
confidence: 99%
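A minimal sketch of the point this statement quotes, constructed here for illustration rather than taken from either paper: nonlinearly reparameterizing a model (for example, describing an exponential model by its mean rather than its rate) changes what the parameter means but leaves the distribution it generates over data, and hence the model's epistemic content, unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rate parameterization: lambda ~ Gamma(2, 1), then y ~ Exponential(rate = lambda).
lam = rng.gamma(shape=2.0, scale=1.0, size=100_000)
y_rate = rng.exponential(scale=1.0 / lam)

# Mean parameterization: mu = 1 / lambda is a nonlinear reparameterization.
# Pushing the same prior draws through the transform induces the matching
# prior on mu, so the prior predictive distribution over y is identical.
mu = 1.0 / lam
y_mean = rng.exponential(scale=mu)

# Same distribution over data (up to Monte Carlo error), different parameters.
print(np.quantile(y_rate, [0.25, 0.50, 0.75]))
print(np.quantile(y_mean, [0.25, 0.50, 0.75]))
```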