Practitioners and policymakers rely on meta-analyses to inform decisions about the allocation of resources to individuals and organizations. It is therefore paramount to consider the validity of these results. A well-documented threat to the validity of research synthesis results is publication bias, a phenomenon whereby studies with large and/or statistically significant effects are more likely to be published than studies with small or null effects. We investigated this phenomenon empirically by reviewing meta-analyses published in top-tier journals between 1986 and 2013 that quantified the difference between effect sizes from published and unpublished research. Of the 383 meta-analyses we reviewed, 81 contained sufficient information to calculate an effect size. Results indicated that published studies yielded larger effect sizes than unpublished studies (d = 0.18, 95% confidence interval [0.10, 0.25]). Moderator analyses revealed that the difference was larger in meta-analyses that included a wide range of unpublished literature. We conclude that intervention researchers require continued support to publish null findings and that meta-analyses should include unpublished studies to mitigate the potential bias from publication status.
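For readers unfamiliar with how a summary estimate such as the one above is produced, the following is a minimal sketch of a random-effects meta-analysis using the DerSimonian-Laird estimator. The per-meta-analysis differences and sampling variances below are hypothetical placeholders, not the data from the review described above.

```python
import numpy as np

def dersimonian_laird(d, var):
    """Random-effects summary of effect-size differences using the
    DerSimonian-Laird tau^2 estimator; returns the pooled estimate
    and a 95% confidence interval."""
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var                              # fixed-effect weights
    d_fe = np.sum(w * d) / np.sum(w)           # fixed-effect mean
    q = np.sum(w * (d - d_fe) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)    # between-study variance
    w_re = 1.0 / (var + tau2)                  # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return d_re, (d_re - 1.96 * se, d_re + 1.96 * se)

# Hypothetical published-minus-unpublished differences and variances
d_diff = [0.25, 0.10, 0.30, 0.05, 0.22]
v_diff = [0.010, 0.015, 0.020, 0.012, 0.018]
summary, ci = dersimonian_laird(d_diff, v_diff)
print(f"d = {summary:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```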
Overlap in meta-reviews results from the inclusion of identical primary studies in similar reviews. It matters to research synthesists because overlap indicates the degree to which reviews address the same or different literatures of primary research. Current guidelines suggest that assessing and documenting the degree of overlap in primary studies, calculated via the corrected covered area (CCA), is a promising method. Yet the CCA is a simple percentage of overlap, and current guidelines do not detail how reviewers can use it as a diagnostic tool or comprehensively incorporate its findings into their conclusions. Furthermore, we maintain that meta-review teams must address the non-independence introduced by overlap more thoroughly than by simply estimating and reporting the CCA (a sketch of the CCA calculation follows the keywords below). Instead, we recommend and elaborate five steps to take when examining overlap, illustrating these steps with an empirical example of primary study overlap in a recently conducted meta-review. This work shows that overlap among the primary studies included in a meta-review is not necessarily a bias and can often be a benefit. We also highlight further areas of caution in this task and the potential for new tools to address non-independence issues.
KEYWORDS: citation matrix, corrected covered area, meta-review, overlap, overview
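The CCA discussed above has a published closed form (Pieper et al., 2014): CCA = (N − r) / (rc − r), where N is the total number of inclusions across all reviews, r is the number of unique primary studies (rows of the citation matrix), and c is the number of reviews (columns). As a minimal sketch, the following code computes the CCA from a binary citation matrix; the matrix shown is hypothetical.

```python
import numpy as np

def corrected_covered_area(citation_matrix):
    """Corrected covered area (Pieper et al., 2014):
    CCA = (N - r) / (r*c - r). The citation matrix has one row per
    unique primary study, one column per review, and a 1 where a
    review includes that study."""
    m = np.asarray(citation_matrix, dtype=int)
    r, c = m.shape          # r unique primary studies, c reviews
    n = m.sum()             # total inclusions across all reviews
    return (n - r) / (r * c - r)

# Hypothetical citation matrix: 5 primary studies across 3 reviews
matrix = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
]
print(f"CCA = {corrected_covered_area(matrix):.1%}")  # prints CCA = 40.0%
```

Pieper and colleagues treat values above roughly 15% as very high overlap, though, as the abstract above argues, the raw percentage alone is an insufficient diagnostic.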
Systematic review and meta-analysis are viable research techniques only when primary research is reported transparently; thus, one might expect meta-analysts to demonstrate best practice in reporting their own results and to achieve a degree of transparency that makes their work reproducible. This assumption has yet to be fully tested in the psychological sciences. We therefore aimed to assess the transparency and reproducibility of psychological meta-analyses. We conducted a meta-review of 150 studies sampled from Psychological Bulletin, extracting information about each review's transparent and reproducible reporting practices. The results revealed that authors reported on average 55% of the criteria and that transparent reporting practices increased over the three decades studied (b = 1.09, SE = 0.24, t = 4.519, p < .001). Review authors consistently reported eligibility criteria, effect-size information, and synthesis techniques. They did not, however, typically report specific search results, screening and extraction procedures, or, most importantly, effect-size and moderator information for each individual study. Far fewer studies provided the statistical code required for complete analytical replication. We argue that the field of psychology, and research synthesis in general, should require review authors to report these elements in a transparent and reproducible manner.
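The trend statistic reported above (b = 1.09, SE = 0.24) comes from regressing reporting completeness on publication year. As a minimal sketch of that kind of analysis, the following ordinary-least-squares fit recovers a slope, its standard error, and a t statistic; the year and percentage values are invented for illustration and do not reproduce the review's data.

```python
import numpy as np

# Hypothetical per-review data: publication year and percentage of
# transparency criteria reported (placeholder values only).
year = np.array([1990, 1995, 2000, 2005, 2010, 2015, 2018])
pct_reported = np.array([40.0, 45.0, 52.0, 55.0, 61.0, 68.0, 70.0])

x = year - year.mean()                       # center the predictor
b = np.sum(x * pct_reported) / np.sum(x**2)  # OLS slope (points/year)
a = pct_reported.mean()                      # intercept at the mean year
resid = pct_reported - (a + b * x)
s2 = np.sum(resid**2) / (len(year) - 2)      # residual variance
se = np.sqrt(s2 / np.sum(x**2))              # standard error of the slope
t = b / se
print(f"b = {b:.2f} points/year, SE = {se:.2f}, t = {t:.2f}")
```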