2016
DOI: 10.1257/app.20150044

Star Wars: The Empirics Strike Back

Abstract (truncated): …jected tests. Our interpretation is that researchers might be tempted to inflate the value of those just-rejected tests by choosing a "significant" specification. We propose a method to measure this residual and describe how it varies by article and author characteristics.
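
The abstract refers to a residual in the distribution of reported test statistics that cannot be explained by selection alone. As a rough illustration of the kind of pattern involved, and not the authors' actual procedure, the Python sketch below compares the mass of absolute z-statistics just below and just above the conventional 1.96 threshold; the function name, the window width, and the simulated input are assumptions made for this example.

```python
# Minimal sketch (not the paper's method): check whether reported z-statistics
# bunch just above the 5% two-sided significance threshold (|z| = 1.96).
# The 0.10-wide comparison windows are an illustrative assumption.
import numpy as np

def bunching_ratio(z_stats, threshold=1.96, width=0.10):
    """Ratio of statistics just above vs. just below the threshold.

    A ratio well above 1 would be consistent with 'missing' marginally
    insignificant results reappearing just past the cutoff, the pattern
    described in the abstract.
    """
    z = np.abs(np.asarray(z_stats, dtype=float))
    just_below = np.sum((z >= threshold - width) & (z < threshold))
    just_above = np.sum((z >= threshold) & (z < threshold + width))
    return just_above / max(just_below, 1)

# With statistics drawn from a smooth distribution the ratio should be
# close to 1; a marked excess above 1 in real data would warrant scrutiny.
rng = np.random.default_rng(0)
print(round(bunching_ratio(rng.normal(0, 2, 50_000)), 2))
```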

Cited by 261 publications (350 citation statements)
References 37 publications

“…The contribution of this paper is to rigorously study "what economic importance means" and "how to measure it." It complements recent works on how to raise standards and increase transparency in applied economics (see e.g., Brodeur et al (2016); Franco et al (2014) on p-value hacking and publication bias; Maniadis et al (2017) or Camerer et al (2016) on replicability; Pritchett and Sandefur (2015) or Vivalt (2015) on external validity; Oster (2019) or Altonji et al (2005) on the impact of unobservables; Miguel et al (2014) on experimentation in social sciences; Olken (2015) on pre-analysis plans).…”
Section: Introduction (supporting)
confidence: 55%
“…For instance, Card and Krueger (1995) show that the t-statistics of studies assessing the effect of the minimum wage on employment also gravitate around 2.00. By a similar but more sophisticated analysis, Brodeur et al (2016) show that t-statistics reported in articles published in top economic journals are characterized by an atypical distribution, irrespective of the research area. Fanelli (2010) documents that such biases are pervasive among most empirical sciences, such as medicine, biology, sociology, or psychology.…”
Section: Results (mentioning)
confidence: 86%
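
The excerpt above describes t-statistics in top journals gravitating around 2.00 and following an atypical distribution. One simple, hedged way to look for that pattern in a collection of reported tests is to plot the histogram of absolute t-statistics with the 1.96 cutoff marked; the file name and column name below are placeholders, not data from the cited papers.

```python
# Illustrative only: visualize the distribution of reported |t|-statistics and
# mark the 5% threshold to look for a local spike near 2.00.
# "reported_tests.csv" and the "t_stat" column are assumed placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("reported_tests.csv")       # one row per reported test (assumed)
t_abs = df["t_stat"].abs().clip(upper=10)    # truncate extreme values for display

plt.hist(t_abs, bins=200, density=True)
plt.axvline(1.96, linestyle="--", color="black", label="|t| = 1.96")
plt.xlabel("absolute t-statistic")
plt.ylabel("density")
plt.legend()
plt.show()
```
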
“…Consequently, DEL's result might be plagued by such bias, as both the dependent and the explanatory variables are measured with errors, as further discussed below. Alternatively, DEL's finding might also be a result of "p-hacking" (e.g., Simmons et al, 2011; Brodeur et al, 2016), because DEL only report results from one measure of preferences for redistribution, even though there are other survey items measuring the same construct. In any case, to rule out these types of problems in DEL's analysis, it is important to test whether these alternative measures of preferences confirm DEL's result or not.…”
Section: Reliability and Validity of the Measure of Preferences (mentioning)
confidence: 99%