2022
DOI: 10.1257/mic.20190133

Which Findings Should Be Published?

Abstract: Given a scarcity of journal space, what is the optimal rule for whether an empirical finding should be published? Suppose publications inform the public about a policy-relevant state. Then journals should publish extreme results, meaning ones that move beliefs sufficiently. This optimal rule may take the form of a one- or two-sided test comparing a point estimate to the prior mean, with critical values determined by a cost-benefit analysis. Consideration of future studies may additionally justify the publicati…
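The abstract's publication rule can be illustrated with a minimal sketch: publish a finding only if its point estimate lies far enough from the prior mean to move beliefs sufficiently. The function name, signature, and numeric values below are hypothetical; in the paper the critical value would come from a cost-benefit analysis, whereas here it is simply a parameter.

```python
# Hypothetical sketch of the publication rule described in the abstract.
# All names and values are illustrative, not taken from the paper.

def should_publish(estimate: float, prior_mean: float,
                   critical_value: float, two_sided: bool = True) -> bool:
    """Return True if the finding is 'extreme' enough to publish,
    i.e., its deviation from the prior mean exceeds the critical value."""
    deviation = estimate - prior_mean
    if two_sided:
        # Two-sided rule: surprises in either direction count.
        return abs(deviation) > critical_value
    # One-sided rule: only sufficiently large positive surprises count.
    return deviation > critical_value

# Example with prior mean 0 and an assumed critical value of 1.5:
print(should_publish(2.0, 0.0, 1.5))   # extreme result -> True
print(should_publish(0.5, 0.0, 1.5))   # close to prior -> False
```

The two-sided variant corresponds to publishing any result that deviates sufficiently from the prior mean; the one-sided variant publishes only deviations in a prespecified direction.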

Cited by 9 publications (6 citation statements)
References 41 publications
“…Although experts have been reported to be good at predicting which study results will replicate in their field (Dreber et al, 2015), journals continue to publish less rigorous research largely because the novelty of results is prioritized over the strength of research designs. Frankel and Kasy (2022) discuss the optimal rule for allocating scarce journal space. If journals are trying to inform public policy, they may want to publish studies with the largest effect sizes that move beliefs sufficiently.…”
Section: Design Matters More Than Results
confidence: 99%
“…The research community has also proposed other avenues to tackle these issues, e.g., by identification and detection methods for false discoveries and p-hacking (Simonsohn et al (2014a,b), Elliott et al (2022)), and even solutions via model averaging (Moral-Benito (2015), Steel (2020)), extreme bound analysis (Leamer and Leonard (1983), Granger and Uhlig (1990)), correction measures (Andrews and Kasy (2019)), shrinkage (van Zwet and Cator (2021)), robust aggregation (Rytchkov and Zhong (2020)), noise dissemination (Echenique and He (2023)), new critical values (McCloskey and Michaillat (2023)), specification curve analysis (Simonsohn et al (2020)), or Bayesian publication decisions (Frankel and Kasy (2022)).…”
Section: Replication: A Necessity And A Challenge
confidence: 99%
“…Here our focus is on understanding whether we can detect p-hacking, and we do not explicitly model selective publication (e.g., Andrews and Kasy, 2019; Kasy, 2021; Frankel and Kasy, 2022). Publication bias interacting with p-hacking is likely to increase the rejection probability of tests.…”
Section: Introductionmentioning
confidence: 99%