2016
DOI: 10.31219/osf.io/umq8d
Preprint

Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking

Abstract: The designing, collecting, analyzing, and reporting of psychological studies entail many choices that are often arbitrary. The opportunistic use of these so-called researcher degrees of freedom aimed at obtaining statistically significant results is problematic because it enhances the chances of false positive results and may inflate effect size estimates. In this review article, we present an extensive list of 34 degrees of freedom that researchers have in formulating hypotheses, and in designing, running, analyzing, and reporting of psychological studies…

Cited by 108 publications (137 citation statements) · References 34 publications
“…Because these path models were fully saturated, fit was perfect; consequently, we omit reporting fit indices. We chose path models over structural equation models (SEMs) because the former are more parsimonious than the latter, and because specifying the measurement aspects of SEMs (e.g., choices concerning item parceling and correlated error structures) often expands researcher degrees‐of‐freedom (Simmons, Nelson, & Simonsohn; Wicherts et al.) and can thus hamper future reproducibility efforts. All continuous predictor variables were grand‐mean centered (i.e., centered around the mean of all scores across both individuals in a couple, or in other words, their collective grand mean; Kashy & Donnellan) and gender was contrast coded (0.5 = men; −0.5 = women).…”
Section: Methods (mentioning)
confidence: 99%
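The centering and coding steps described in this excerpt can be illustrated with a short sketch. This is a minimal example, not the cited paper's actual code; the column names and the toy dyadic data are hypothetical.

```python
import pandas as pd

# Hypothetical dyadic data: one row per individual, two individuals per couple.
df = pd.DataFrame({
    "couple_id": [1, 1, 2, 2],
    "gender":    ["man", "woman", "man", "woman"],
    "support":   [4.0, 5.0, 3.0, 6.0],   # a continuous predictor
})

# Grand-mean centering: subtract the mean of all scores across both members
# of every couple (the collective grand mean), not a per-couple mean.
grand_mean = df["support"].mean()
df["support_c"] = df["support"] - grand_mean

# Contrast coding of gender as described in the excerpt: 0.5 = men, -0.5 = women.
df["gender_c"] = df["gender"].map({"man": 0.5, "woman": -0.5})

print(df)
```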
“…Going beyond the discussions about the difficulty of avoiding the introduction of confounding variables, this literature identifies common and even recommended research practices that may dramatically increase the likelihood of false‐positive results. Even common practices such as trying to collect additional data to see whether a null result is due to limited statistical power, or doing due diligence by using multiple dependent variables or specifications of dependent variables, testing for moderators, or including covariates in multivariate statistical analyses can result in unintentional p‐fishing (Simmons, Nelson, and Simonsohn; Wicherts et al.). Although there are a lot of lessons from this literature for both researchers and reviewers, some simple solutions exist that can increase our confidence in the studies, such as requiring that researchers report bivariate experimental results (i.e., means tests) without the covariates to show the effects of the manipulated variable before reporting the multivariate analyses with covariates or nonmanipulated moderators…”
Section: “Methodolotry”: Applying More Lessons From Neighboring Disciplines (mentioning)
confidence: 99%
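The reporting practice recommended in this excerpt, showing the bare means test before the covariate-adjusted model, can be sketched as follows. This is a hedged illustration with simulated data and hypothetical variable names, not the cited study's analysis.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical experiment: one manipulated condition, one covariate, one outcome.
df = pd.DataFrame({
    "condition": np.repeat([0, 1], 50),
    "age": rng.integers(18, 65, 100),
})
df["outcome"] = 0.4 * df["condition"] + 0.01 * df["age"] + rng.normal(0, 1, 100)

# Step 1: report the bivariate means test without covariates, so the raw
# effect of the manipulated variable is visible on its own.
res = stats.ttest_ind(df.loc[df.condition == 1, "outcome"],
                      df.loc[df.condition == 0, "outcome"])
print(f"means test: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")

# Step 2: only then report the multivariate model with covariates.
model = smf.ols("outcome ~ condition + age", data=df).fit()
print(model.summary())
```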
“…Establishing a preregistration requirement for RWD analyses is an effective approach to reduce the risk of multiple hypothesis testing and p‐hacking. However, many observational datasets, such as Medicare, Medicaid, and Marketscan, are available and fully accessible to the researchers before the conception of specific RWD studies, which makes adequate preregistration a significant challenge.…”
Section: The Way Forward: How Can We Capture Value From RWE To Effect… (mentioning)
confidence: 99%