Formal empirical assessments of replication have recently become more prominent in several areas of science, including psychology. These assessments have used different statistical approaches to determine whether a finding has been replicated. The purpose of this article is to provide several alternative conceptual frameworks that lead to different statistical analyses to test hypotheses about replication. All of these analyses are based on statistical methods used in meta-analysis. The differences among the methods described involve whether the burden of proof is placed on replication or nonreplication, whether replication is exact or allows for a small amount of "negligible heterogeneity," and whether the studies observed are assumed to be fixed (constituting the entire body of relevant evidence) or are a sample from a universe of possibly relevant studies. The statistical power of each of these tests is computed and shown to be low in many cases, raising concerns about the interpretability of tests for replication.
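To make the meta-analytic framing concrete, here is a minimal sketch (not the article's own code, and with hypothetical numbers) of the most common test of "exact replication": Cochran's Q statistic from fixed-effect meta-analysis, which under the null hypothesis that all k studies estimate the same effect follows a chi-square distribution with k - 1 degrees of freedom.

```python
# Illustrative sketch: Q test of homogeneity across an original study
# and its replications. All inputs below are hypothetical.
import numpy as np
from scipy import stats

def q_test(effects, variances):
    """Cochran's Q test of homogeneity for k effect estimates."""
    d = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # inverse-variance weights
    pooled = np.sum(w * d) / np.sum(w)            # fixed-effect pooled estimate
    q = np.sum(w * (d - pooled) ** 2)
    df = len(d) - 1
    return q, df, stats.chi2.sf(q, df)

# Example: an original study (d = 0.50, v = 0.04) and two replications.
q, df, p = q_test([0.50, 0.18, 0.22], [0.04, 0.02, 0.02])
print(f"Q = {q:.2f}, df = {df}, p = {p:.3f}")
```

Note that "burden of proof" matters here: a nonsignificant Q is weak evidence of replication when the test has low power, which is exactly the concern the abstract raises.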
The problem of assessing whether experimental results can be replicated is becoming increasingly important in many areas of science. It is often assumed that assessing replication is straightforward: All one needs to do is repeat the study and see whether the results of the original and replication studies agree. This article shows that the power of the statistical test for whether two studies obtain the same effect is smaller than the power of either study to detect an effect in the first place. Thus, unless the original study and the replication study have unusually high power (e.g., power of 98%), a single replication study will not have adequate sensitivity to provide an unambiguous evaluation of replication.
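The power gap described here follows from the variances involved: the test that two effects differ depends on the variance of the difference, v1 + v2, which exceeds either study's own variance. A short sketch (hypothetical inputs, my own helper functions) makes the comparison explicit.

```python
# Illustrative sketch: power of a single study's z-test for a nonzero
# effect versus power of the test that two studies' effects differ.
from scipy import stats

def power_one_study(effect, variance, alpha=0.05):
    """Power of a two-sided z-test that one study's effect is nonzero."""
    z_crit = stats.norm.isf(alpha / 2)
    ncp = effect / variance ** 0.5
    return stats.norm.sf(z_crit - ncp) + stats.norm.cdf(-z_crit - ncp)

def power_difference(true_diff, v1, v2, alpha=0.05):
    """Power of a two-sided z-test that two studies' effects differ."""
    z_crit = stats.norm.isf(alpha / 2)
    ncp = true_diff / (v1 + v2) ** 0.5   # variance of the difference is v1 + v2
    return stats.norm.sf(z_crit - ncp) + stats.norm.cdf(-z_crit - ncp)

# Two studies, each with sampling variance 0.02 (roughly 200 subjects
# total per study for a standardized mean difference):
print(power_one_study(0.3, 0.02))         # ~0.56 to detect d = 0.3
print(power_difference(0.3, 0.02, 0.02))  # ~0.32 to detect a 0.3 difference
```

With identical designs, detecting a between-study difference of a given size is always harder than detecting an effect of that size in either study alone.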
In this study, we reanalyze recent empirical research on replication from a meta-analytic perspective. We argue that there are different ways to define "replication failure," and that analyses can focus on exploring variation among replication studies or assess whether their results contradict the findings of the original study. We apply this framework to a set of psychological findings that have been replicated and assess the sensitivity of these analyses. We find that tests for replication that involve only a single replication study are almost always severely underpowered. Among the 40 findings for which ensembles of multisite direct replications were conducted, we find that between 11 and 17 (28% to 43%) ensembles produced heterogeneous effects, depending on how replication is defined. This heterogeneity could not be completely explained by moderators documented by replication research programs. We also find that these ensembles were not always well-powered to detect potentially meaningful values of heterogeneity. Finally, we identify several discrepancies between the results of original studies and the distribution of effects found by multisite replications but note that these analyses also have low power. We conclude by arguing that efforts to assess replication would benefit from further methodological work on designing replication studies to ensure analyses are sufficiently sensitive.
Public Significance Statement: Replication is critical to building reliable scientific knowledge. This article argues that a meta-analytic approach can shed greater light on whether a finding is replicable and applies this approach to empirical research on replication in psychology. It also reports the sensitivity of those analyses.
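For multisite ensembles, the relevant quantity is between-site heterogeneity rather than a pairwise comparison. A minimal sketch (again not the paper's analysis code; all site estimates below are hypothetical) of the standard DerSimonian-Laird approach, which pairs the Q test with an estimate of tau^2, the variance of true effects across sites:

```python
# Illustrative sketch: heterogeneity in an ensemble of direct replications.
import numpy as np
from scipy import stats

def dersimonian_laird(effects, variances):
    """Return (Q, p, tau2) for k replication effect estimates."""
    d = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - pooled) ** 2)
    k = len(d)
    p = stats.chi2.sf(q, k - 1)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)   # method-of-moments estimate, truncated at 0
    return q, p, tau2

# Hypothetical ensemble of six replication sites:
q, p, tau2 = dersimonian_laird(
    [0.31, 0.05, 0.22, -0.04, 0.40, 0.12],
    [0.02, 0.03, 0.02, 0.03, 0.02, 0.03],
)
print(f"Q = {q:.2f}, p = {p:.3f}, tau^2 = {tau2:.3f}")
```

The sensitivity concern carries over: with typical numbers of sites, the Q test can miss values of tau^2 large enough to matter substantively.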
Missing covariates are a common issue when fitting meta-regression models. Standard practice for handling missing covariates tends to involve one of two approaches. In a complete-case analysis, effect sizes for which relevant covariates are missing are omitted from model estimation. Alternatively, researchers have employed the so-called "shifting units of analysis," wherein complete-case analyses are conducted on only certain subsets of relevant covariates. In this article, we clarify conditions under which these approaches generate unbiased estimates of regression coefficients. We find that unbiased estimates are possible when the probability of observing a covariate is completely independent of effect sizes. When that does not hold, regression coefficient estimates may be biased. We study the potential magnitude of that bias assuming a log-linear model of missingness and find that the bias can be substantial, as large as Cohen's d = 0.4-0.8, depending on the missingness mechanism.
Keywords: complete-case analysis, meta-regression, missing data, shifting units of analysis
Highlights: Missing covariates are a common problem when conducting meta-regressions. A common practice for meta-regression analyses has been to ignore effects for which covariates are missing. However, a vast statistical literature suggests that analyses that ignore missing data can only provide accurate estimates of relevant quantities under certain conditions. In this article, we examine conditions under which ignoring missing covariates in a meta-regression can still lead to unbiased estimation of regression coefficients. We also investigate the possible magnitude and sources of bias when those conditions do not hold. Our findings highlight that substantial bias can be induced by ignoring missing data in a meta-regression.
Introduction (excerpt): Meta-regression is a useful tool for studying important sources of variation between effects in a meta-analysis [1,2]. Analyses of these models in the absence of missing data have been studied thoroughly in the literature [3-7]. However, it is common for meta-analytic datasets to be missing data [8]. In the context of meta-regression, …
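The mechanism behind this bias can be shown with a small simulation. The sketch below is not taken from the article: the log-linear missingness model, coefficients, and unweighted least-squares fit are all assumptions chosen for illustration. When the probability that a covariate is reported increases with the effect size, complete-case estimates drift away from the true coefficients.

```python
# Simulation sketch (hypothetical setup): covariate missingness follows a
# log-linear model in the effect size, P(observed) = min(1, exp(a + b*d)),
# so studies with larger effects are more likely to report the covariate.
import numpy as np

rng = np.random.default_rng(1)
k = 5000                                     # effect sizes in a synthetic meta-analysis
x = rng.binomial(1, 0.5, k)                  # binary study-level covariate
d = 0.2 + 0.3 * x + rng.normal(0, 0.15, k)   # true model: b0 = 0.2, b1 = 0.3

p_obs = np.minimum(1.0, np.exp(-0.5 + 1.5 * d))  # log-linear missingness in d
observed = rng.random(k) < p_obs

# Complete-case OLS fit of d on x (unweighted, for illustration only):
X = np.column_stack([np.ones(observed.sum()), x[observed]])
b0, b1 = np.linalg.lstsq(X, d[observed], rcond=None)[0]
print(f"complete-case estimates: b0 = {b0:.3f} (true 0.2), b1 = {b1:.3f} (true 0.3)")
```

Because small-effect studies are disproportionately dropped, the intercept is pushed upward and the covariate's coefficient is attenuated, consistent with the abstract's point that bias appears once missingness depends on effect sizes.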