2021
DOI: 10.6018/analesps.433051

The small impact of p-hacking marginally significant results on the meta-analytic estimation of effect size

Abstract: The label p-hacking (pH) refers to a set of opportunistic practices aimed at turning into significant some p-values that should be non-significant. Some have argued that we must prevent and fight against p-hacking for several reasons, especially because of its potentially harmful effects on the evaluation of primary research results and on their meta-analytic synthesis. We focus here on the effect of one specific type of p-hacking, centered on marginally significa…

Cited by 2 publications (6 citation statements)
References 58 publications (64 reference statements)
“…Contrary to these two simulation studies (Botella et al., 2021; Friese & Frankenbach, 2020), we found that p-hacking with actual data and using fairly generic researcher DFs could cause substantial inflation of meta-analytic average effect sizes even when the average effect appears to be null. Both Friese & Frankenbach (2020) and Botella et al. (2021) ran extensive simulation studies of the effect of p-hacking across many conditions, and Friese & Frankenbach (2020) considered how it interacts with publication bias, something we did not do. We believe the difference in results is due to the choice in both of these simulation studies to p-hack results based on the common assumption that p-hacking leads to a peak of p-values just below 0.05 (Hartgerink, 2017).…”
Section: Discussion (contrasting)
confidence: 94%
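
The contrast drawn in this citing passage, generic researcher degrees of freedom versus hacking aimed only at marginally significant p-values, is easy to illustrate with a toy simulation. The Python sketch below is not code from any of the cited papers; the settings (n_studies, n_per_group, n_outcomes) and the hacking strategy (test several outcome variables, report the most favorable one) are illustrative assumptions.

```python
# Minimal sketch, assumptions only: p-hacking via one generic researcher
# degree of freedom (several outcome variables, report the best one)
# under a true null effect, then crudely meta-analyzed as a plain mean.
import numpy as np

rng = np.random.default_rng(42)
n_studies, n_per_group, n_outcomes = 100, 30, 5  # illustrative settings

def reported_d(hack: bool) -> float:
    """Cohen's d reported by one two-group study; the true effect is zero."""
    ds = []
    for _ in range(n_outcomes if hack else 1):
        a = rng.normal(0.0, 1.0, n_per_group)  # "treatment" group
        b = rng.normal(0.0, 1.0, n_per_group)  # control group
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        ds.append((a.mean() - b.mean()) / pooled_sd)
    # With equal group sizes, the largest d has the smallest one-sided p,
    # so reporting max(ds) mimics picking the "best" outcome.
    return max(ds)

honest = np.mean([reported_d(hack=False) for _ in range(n_studies)])
hacked = np.mean([reported_d(hack=True) for _ in range(n_studies)])
print(f"mean reported d, honest:   {honest:+.3f}")  # close to zero
print(f"mean reported d, p-hacked: {hacked:+.3f}")  # clearly positive
```

In this toy setup the hacked average lands around d = 0.3 even though the true effect is zero, which is the kind of inflation from generic researcher DFs that the citing authors describe.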
“…Our modeling of it in this study was relatively straightforward and we only attempted to model the outcomes of intentional selective reporting (p-hacking). Nonetheless, our biased selection methods applied to empirical RRR data are similar to those used in the simulations of a recent compendium of p-hacking methods (Stefan & Schönbrodt, 2022) and are on par with other recent simulation studies of p-hacking in a meta-analysis (Botella et al., 2021; Friese & Frankenbach, 2020).…”
Section: Discussion (mentioning)
confidence: 99%
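
For contrast, the narrower scenario named in the indexed paper's title, hacking only marginally significant results, can be sketched the same way. Again this is an illustrative assumption, not the authors' code: studies with 0.05 <= p < 0.10 are nudged just past the threshold, and the pooled estimate is compared before and after (with a constant per-study standard error the plain mean equals the fixed-effect inverse-variance estimate).

```python
# Minimal sketch, assumptions only: p-hacking restricted to marginally
# significant results (0.05 <= p < 0.10) and its effect on the pooled d.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_studies, n, true_d = 200, 40, 0.2  # illustrative meta-analysis settings
se = np.sqrt(2 / n)                  # approximate SE of d for small effects

d = rng.normal(true_d, se, n_studies)    # observed per-study effect sizes
p = 2 * stats.norm.sf(np.abs(d) / se)    # two-sided p, normal approximation

hacked = d.copy()
marginal = (p >= 0.05) & (p < 0.10)      # the marginally significant band
# Nudge marginal results just past significance (|z| = 2.0, p ~ .046).
hacked[marginal] = np.sign(d[marginal]) * 2.0 * se

print(f"{marginal.sum()} of {n_studies} studies were marginal")
print(f"pooled d, honest: {d.mean():.3f}")
print(f"pooled d, hacked: {hacked.mean():.3f}")
```

Only a small fraction of studies fall in the marginal band and each is moved by at most a few hundredths of a standard deviation, so the pooled estimate barely shifts, consistent with the small impact the title reports.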