2018
DOI: 10.31234/osf.io/9y3mp
Preprint
Null-hacking, a lurking problem

Abstract: Pre-registration of analysis plans involves making data-analysis decisions before the data are analyzed, in order to prevent flexibly re-running analyses until a specific result appears (p-hacking). Just because a model and result are pre-registered, however, does not make them true. The complement to p-hacking, null-hacking, is the use of the same questionable research practices to re-analyze open data to return a null finding. We provide a vocabulary for null-hacking and introduce the threat it poses to open science and pre…

Cited by 14 publications (11 citation statements)
References 29 publications
“…It is now widely appreciated that original investigators face a conflict between the desire for accuracy and the career incentive to discover statistically significant results. There seems to be a widespread implicit presumption, however, that investigators who undertake replication tests are not subject to similar conflicts, but there are good reasons to believe that they are (16, 17). As replicability and research integrity have become topics of increasing interest, failures to replicate important original findings have begun to be published in top journals (1, 46, 18), while successful direct replications of existing findings receive much less attention.…”
Citation type: mentioning; confidence: 99%
“…The one-sided focus on “p-hacking,” the motivated pursuit of statistically significant results by original investigators, ignores (and arguably, contributes to) a new threat to research integrity posed by “null hacking,” the motivated pursuit of null results by replicating investigators (16). The purpose of this article is to demonstrate that replicator degrees of freedom, defined as discretion exercised at 2 stages of the replication process—experimental design and data analysis—can cause replicating investigators to arrive at incorrect conclusions about the replicability of an original finding.…”
Citation type: mentioning; confidence: 99%
“…There are also a number of reasons why replication efforts may fail to replicate a real effect including lack of power in the replications (Cohen, 1969), lack of fidelity among researchers to the procedures of the original study (see Gilbert et al, 2016b), unacknowledged variance in auxiliary assumptions (Earp & Trafimow, 2015), deliberate questionable research practices used by the replicator to show a lack of evidence (e.g., Protzko, 2018), among others.…”
Section: Introduction; citation type: mentioning; confidence: 99%
“…Similarly, higher extra pointless values (e.g., 0.20 to 0.80) can be used to rule out unloved control variables or support a researcher's effort to discredit seminal research findings. Specifically, the ongoing debates on replicability in various fields of science (e.g., Ankel-Peters et al, 2023; Camerer et al, 2016; Camerer et al, 2018; Dennis et al, 2020; Nosek et al, 2022; Page et al, 2021) further sparked the need for unambiguously unsuccessful replication studies with p-values far above 0.05 to boost the replicator's reputation as a scientific myth-buster (Bryan et al, 2019; Protzko, 2018). The extra pointless metric will also help with the publication of responses to such criticism that successfully replicate the effect under scrutiny, thereby fueling a never-ending back-and-forth of commentaries and rejoinders (i.e., more publications).…”
Section: Methodology and Results; citation type: mentioning; confidence: 99%