2017
DOI: 10.31219/osf.io/gv25c
Preprint

The Effect of Publication Bias on the Assessment of Heterogeneity

Abstract: One of the main goals of meta-analysis is to test and estimate the heterogeneity of effect size. We examined the effect of publication bias on the Q-test and assessments of heterogeneity as a function of true heterogeneity, publication bias, true effect size, number of studies, and variation of sample sizes. The expected values of the heterogeneity measures H² and I² were analytically derived, and the power and the Type I error rate of the Q-test were examined in a Monte Carlo simulation study. Our results show t…
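As a rough illustration of the quantities named in the abstract, the following minimal NumPy/SciPy sketch computes Q, H² = Q/(k − 1), and I² from a set of study estimates and sampling variances, and runs a toy Monte Carlo comparing the Q-test's rejection rate with and without a simple "publish only one-sided significant results" rule standing in for publication bias. This is not the authors' simulation design; the effect-size metric, selection rule, and all parameter values are assumptions chosen for illustration.

```python
# Illustrative sketch only (not the paper's simulation code): Q, H^2, I^2,
# and a toy Monte Carlo of the Q-test under a crude publication-bias rule.
import numpy as np
from scipy import stats

def q_h2_i2(y, v):
    """Q statistic, H^2 = Q/(k-1), and I^2 (%) from estimates y and variances v."""
    w = 1.0 / v
    y_fe = np.sum(w * y) / np.sum(w)            # fixed-effect pooled estimate
    q = np.sum(w * (y - y_fe) ** 2)
    df = len(y) - 1
    h2 = q / df
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, h2, i2

def simulate_rejection_rate(k=20, n=40, mu=0.0, tau2=0.0, bias=False,
                            reps=1000, alpha=0.05, rng=None):
    """Share of replications in which the Q-test rejects homogeneity."""
    rng = rng or np.random.default_rng(1)
    rejections = 0
    for _ in range(reps):
        y, v = [], []
        while len(y) < k:
            vi = 2.0 / n + mu ** 2 / (4.0 * n)       # rough SMD sampling variance
            yi = rng.normal(mu, np.sqrt(tau2 + vi))  # true heterogeneity via tau2
            # publication bias: keep only one-sided significant studies
            if bias and yi / np.sqrt(vi) < stats.norm.ppf(0.95):
                continue
            y.append(yi); v.append(vi)
        q, _, _ = q_h2_i2(np.array(y), np.array(v))
        if stats.chi2.sf(q, k - 1) < alpha:
            rejections += 1
    return rejections / reps

if __name__ == "__main__":
    print("no bias :", simulate_rejection_rate(bias=False))  # ~ Type I error rate
    print("bias    :", simulate_rejection_rate(bias=True))   # under selection
```

Comparing the two printed rates under a homogeneous, zero-effect scenario gives a rough sense of how selection on significance can distort the Q-test's Type I error rate; setting tau2 > 0 does the same for its power.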

Cited by 4 publications (7 citation statements)
References 28 publications (3 reference statements)
“…Instead of factoring out study idiosyncrasies and protecting the consumers of research from single-study biases, an unadjusted meta-analysis (under publication bias) tends to find a significant overall effect regardless of whether it is there or not. If there is a publication bias, including more published studies only exacerbates the problem (Augusteijn et al., 2019). Evidence synthesis of biased literature, therefore, does not represent a severe test, making fallacious theories hardly falsifiable (see Mayo, 2018).…”
Section: Neglect of Publication Bias Compromises Meta-Analyses of Educational Research
Citation type: mentioning (confidence: 99%)
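The quoted point can be seen in a small sketch (again NumPy/SciPy, not the cited authors' analysis): with a true effect of exactly zero and a filter that publishes only one-sided significant results, the naive fixed-effect pooled estimate is pushed away from zero, and its p-value only shrinks as more published studies are added. The selection rule and parameter values are assumptions made for illustration.

```python
# Illustrative sketch of the quoted claim: zero true effect, "significant
# results only" publication filter, naive fixed-effect pooling.
import numpy as np
from scipy import stats

def pooled_p_under_bias(k, v=0.04, rng=None):
    """Naive fixed-effect pooled estimate and z-test p-value from k published studies."""
    rng = rng or np.random.default_rng(3)
    y = []
    while len(y) < k:
        yi = rng.normal(0.0, np.sqrt(v))                 # true effect is exactly zero
        if yi / np.sqrt(v) >= stats.norm.ppf(0.95):      # only significant results published
            y.append(yi)
    y = np.array(y)
    w = np.full(k, 1.0 / v)
    mu_hat = np.sum(w * y) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    return mu_hat, 2 * stats.norm.sf(abs(mu_hat / se))

for k in (5, 20, 80):
    mu_hat, p = pooled_p_under_bias(k)
    print(f"k={k:3d}  pooled estimate={mu_hat:.3f}  p={p:.2e}")
```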
“…van Aert and colleagues even went so far as to say "In case of strong evidence or strong indications of p-hacking, [meta-analysts should] be reluctant in interpreting estimates of traditional meta-analytic techniques and p-uniform and p-curve because their effect-size estimates may be biased in any direction depending on the type of p-hacking used" (p. 714). It is also currently unknown how QRPs affect heterogeneity estimates (Augusteijn et al., 2019). Carter et al.'s (2019) simulations suggest that the effects of publication bias may be somewhat larger than the effects of QRPs under commonly observed conditions in the social sciences, although the generalizability of this conclusion might depend on how closely the conditions of their simulations match the real world, as well as which type and combination of QRPs are used.…”
Section: Questionable Research Practices
Citation type: mentioning (confidence: 99%)
“…Further, the weight function, even if estimated, may not be well recovered by the model (Hedges & Vevea, 2005). Crucially, the performance of the model depends on the accuracy of the selection function, which is to some extent unknowable (Augusteijn et al., 2019). Carter et al. (2019) report that the 3PSM showed reasonable Type I error rates but was underpowered under many simulated conditions that match those often observed in the social sciences.…”
Section: Selection Models: P-curve, P-uniform, and the Three-Parameter Selection Model
Citation type: mentioning (confidence: 99%)
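To make concrete what a "selection function" and its weight function look like in this context, here is a minimal sketch, not the 3PSM implementation evaluated in the works quoted above: it simulates a literature in which nonsignificant studies survive only with some probability, then fits a one-cutpoint step selection model by maximum likelihood, estimating the mean effect, the between-study variance, and the relative publication probability of nonsignificant results. The cutpoint, parameter values, and data-generating rule are assumptions for illustration.

```python
# Illustrative sketch of a one-cutpoint step selection model (simplified 3PSM)
# fitted by maximum likelihood; not the implementation discussed in the text.
import numpy as np
from scipy import stats, optimize

Z_CUT = stats.norm.ppf(0.975)  # one-sided p = .025 cutpoint on the z scale

def simulate_biased_studies(k, mu, tau2, v, delta, rng):
    """Draw studies; nonsignificant ones survive only with probability delta."""
    y = []
    while len(y) < k:
        yi = rng.normal(mu, np.sqrt(tau2 + v))
        significant = yi / np.sqrt(v) >= Z_CUT
        if significant or rng.random() < delta:
            y.append(yi)
    return np.array(y), np.full(k, v)

def neg_loglik(params, y, v):
    """Negative log-likelihood of the one-cutpoint step selection model."""
    mu, tau2, delta = params
    sigma = np.sqrt(v + tau2)
    cut = Z_CUT * np.sqrt(v)                       # significance threshold on y
    p_nonsig = stats.norm.cdf((cut - mu) / sigma)  # P(study is nonsignificant)
    A = (1 - p_nonsig) + delta * p_nonsig          # per-study normalizing constant
    w = np.where(y / np.sqrt(v) >= Z_CUT, 1.0, delta)
    logf = np.log(w) + stats.norm.logpdf(y, mu, sigma) - np.log(A)
    return -np.sum(logf)

def fit_3psm(y, v):
    res = optimize.minimize(
        neg_loglik, x0=[np.mean(y), 0.01, 0.5], args=(y, v),
        method="L-BFGS-B",
        bounds=[(-5, 5), (1e-6, 5), (1e-3, 1.0)])
    return res.x  # (mu_hat, tau2_hat, delta_hat)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    y, v = simulate_biased_studies(k=100, mu=0.2, tau2=0.05, v=0.04,
                                   delta=0.2, rng=rng)
    naive_mu = np.sum(y / v) / np.sum(1 / v)       # unadjusted fixed-effect estimate
    mu_hat, tau2_hat, delta_hat = fit_3psm(y, v)
    print(f"naive mu: {naive_mu:.3f}  adjusted mu: {mu_hat:.3f}  "
          f"tau2: {tau2_hat:.3f}  delta: {delta_hat:.3f}")
```

Because the fitted model is only as good as the assumed step function, swapping in a different cutpoint or a smoother weight function changes the adjusted estimates, which is the sense in which performance depends on a selection function that is, as the quoted passage notes, to some extent unknowable.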