When analyzing a heterogeneous body of literature, there may be many potentially relevant between-studies differences. These differences can be coded as moderators and accounted for using meta-regression. However, many applied meta-analyses lack the power to adequately account for multiple moderators, as the number of studies on any given topic is often low. The present study introduces Bayesian Regularized Meta-Analysis (BRMA), an exploratory algorithm that can select relevant moderators from a larger number of candidates. This approach is suitable when heterogeneity is suspected, but it is not known which moderators most strongly influence the observed effect size. We present a simulation study to validate the performance of BRMA relative to state-of-the-art meta-regression (RMA). Results indicated that BRMA compared favorably to RMA on three metrics: predictive performance, which is a measure of the generalizability of results; the ability to reject irrelevant moderators; and the ability to recover population parameters with low bias. BRMA had slightly lower ability to detect true effects of relevant moderators, but the overall proportion of Type I and Type II errors was equivalent to that of RMA. Furthermore, BRMA regression coefficients were slightly biased towards zero (by design), but its estimates of residual heterogeneity were unbiased. BRMA performed well with as few as 20 studies in the training data, suggesting its suitability as a small-sample solution. We discuss how applied researchers can use BRMA to explore between-studies heterogeneity in meta-analysis.
When meta-analyzing heterogeneous bodies of literature, meta-regression can be used to account for potentially relevant between-studies differences. A key challenge is that the number of candidate moderators is often high relative to the number of studies. This introduces risks of overfitting, spurious results, and model non-convergence. To overcome these challenges, we introduce Bayesian Regularized Meta-Analysis (BRMA), which selects relevant moderators from a larger set of candidates by shrinking small regression coefficients towards zero with regularizing (LASSO or horseshoe) priors. This method is suitable when there are many potential moderators, but it is not known beforehand which of them are relevant. A simulation study compared BRMA against state-of-the-art random effects meta-regression using restricted maximum likelihood (RMA). Results indicated that BRMA outperformed RMA on three metrics: BRMA had superior predictive performance, which means that its results generalized better; it was better at rejecting irrelevant moderators; and its residual heterogeneity estimates were less biased than those of RMA. BRMA was somewhat worse at detecting true effects of relevant moderators, but the overall proportion of Type I and Type II errors was equivalent to that of RMA. BRMA regression coefficients were slightly biased towards zero (by design). BRMA performed well with as few as 20 studies, suggesting its suitability as a small-sample solution. We present free, open-source software implementations in the R-package pema (for penalized meta-analysis) and in the stand-alone statistical program JASP. An applied example demonstrates the use of the R-package.
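The selection mechanism described above can be illustrated with a frequentist analogue: a LASSO penalty shrinks small meta-regression coefficients to exactly zero, retaining only moderators with appreciable effects. The sketch below is a conceptual illustration on simulated data, not the pema implementation; the simulated effect sizes, the inverse-variance weighting, and the penalty strength `alpha=0.1` are all assumptions chosen for demonstration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Simulate 40 studies with 10 candidate moderators, of which only 2 are relevant.
k, p = 40, 10
X = rng.normal(size=(k, p))
beta = np.zeros(p)
beta[:2] = 0.5                        # true moderator effects (assumed values)
v = rng.uniform(0.01, 0.05, size=k)   # per-study sampling variances
tau2 = 0.02                           # residual between-studies heterogeneity
y = 0.3 + X @ beta + rng.normal(scale=np.sqrt(v + tau2))

# Weight studies by inverse sampling variance and apply a LASSO penalty;
# small coefficients are shrunk to exactly zero, which "selects" moderators.
fit = Lasso(alpha=0.1).fit(X, y, sample_weight=1.0 / v)
selected = np.flatnonzero(fit.coef_)
print("selected moderators:", selected)
```

In the Bayesian version, the LASSO or horseshoe prior plays the role of the penalty, and posterior intervals replace the hard zero/non-zero decision.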
The product Bayes factor (PBF) can synthesize evidence for an informative hypothesis across heterogeneous replication studies. It is particularly useful when the number of studies is relatively low and conventional assumptions about between-studies heterogeneity are likely violated. The present paper introduces a user-friendly implementation of the PBF in the bain R-package. The method was validated in a simulation study that manipulated sample size, number of replication samples, and measurement reliability. Several tutorial examples demonstrate the use of the method in distinct use cases. Results of the simulation study show that PBF had higher overall accuracy when benchmarked against other evidence synthesis methods, including random-effects meta-analysis (RMA). This was primarily due to PBF's greater sensitivity in detecting a true effect. However, PBF had relatively lower specificity. The PBF showed increasing sensitivity and specificity with increasing sample size. With an increasing number of samples, lower sensitivity was traded for greater specificity. Although PBF's overall performance was less affected by measurement reliability than that of the other methods, this masked a trade-off between reliability and specificity. PBF thus appears to be a promising method for meta-analysis of heterogeneous conceptual replication studies. Nonetheless, users should be aware of its lower specificity, and of the fact that the Bayesian approach to inference addresses a qualitatively different research question than other evidence synthesis methods.
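The core aggregation step of the PBF is multiplication of study-level Bayes factors. The sketch below illustrates this with a simplified normal-approximation Bayes factor for an inequality-constrained hypothesis (H: theta > 0); the `bf_positive` helper and the study estimates are hypothetical, and this is not the bain implementation itself.

```python
import numpy as np
from scipy.stats import norm

def bf_positive(estimate, se):
    """Approximate Bayes factor for H: theta > 0 against its complement,
    using a normal approximation to the posterior. A simplified sketch of
    fit/complexity logic for inequality constraints (hypothetical helper,
    not the bain implementation)."""
    fit = norm.cdf(estimate / se)  # posterior mass with theta > 0
    complexity = 0.5               # prior mass with theta > 0
    return (fit / (1 - fit)) / (complexity / (1 - complexity))

# Hypothetical replication studies: (effect estimate, standard error).
studies = [(0.30, 0.10), (0.15, 0.12), (0.22, 0.09)]

bfs = [bf_positive(est, se) for est, se in studies]
pbf = float(np.prod(bfs))  # product Bayes factor: multiply study-level BFs
print("per-study BFs:", [round(b, 1) for b in bfs], "PBF:", round(pbf, 1))
```

Because each study's Bayes factor is computed against that study's own model, no common effect size or heterogeneity structure across studies is assumed, which is what makes the PBF attractive for heterogeneous replications.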
In the social and behavioral sciences, the gold standard for scientific evidence is finding results that are consistent across independent studies. To summarize results from multiple studies, parameter estimates are conventionally aggregated with meta-analysis. However, this method is limited to studies that share the same context and design, which often means that a wealth of information remains unexploited. This paper proposes evidence aggregation using GORIC(A) weights: an alternative and/or complementary statistical tool for the aggregation of evidence across studies. Rather than aggregating parameter estimates into an overall estimate, GORIC(A) evidence aggregation combines and quantifies the support for a shared central theory across studies. It does so using GORIC(A), an information criterion that can evaluate both equality and inequality/order restrictions. GORIC(A) can be applied to a single study, and this GORIC(A) evidence can be aggregated over multiple studies, irrespective of context or design. The method is validated with a simulation study showing that GORIC(A) evidence aggregation is not affected by study heterogeneity and can be used for evidence synthesis. This implies that GORIC(A) evidence aggregation can successfully combine evidence for a central theory over a widely diverse set of studies, increasing the information available to investigate a theory. Furthermore, GORIC(A) evidence aggregation strengthens the robustness of, and confidence in, results because it can take into account all types of studies that examine the central theory.
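GORIC(A) values can be converted into relative weights in the same way as Akaike weights: each hypothesis's support is exp(-GORIC(A)/2), normalized over the set of hypotheses. The sketch below assumes precomputed GORIC(A) values (in practice these come from fitting the hypotheses to each study's own model, whatever its design); the numbers are made up, and the multiply-and-renormalize aggregation shown is one simple way to combine per-study weights, not necessarily the exact procedure of the paper.

```python
import numpy as np

def goric_weights(goric_values):
    """Convert GORIC(A) values into relative weights, analogous to Akaike
    weights: lower GORIC(A) indicates more support for a hypothesis."""
    g = np.asarray(goric_values, dtype=float)
    rel = np.exp(-0.5 * (g - g.min()))  # shift by the minimum for stability
    return rel / rel.sum()

# Illustrative (made-up) GORIC(A) values for a central theory H1 versus its
# complement, from two hypothetical studies with different designs.
study_a = goric_weights([102.3, 105.1])  # weights for [H1, complement]
study_b = goric_weights([58.7, 60.2])

# Combine evidence by multiplying per-study weights and renormalizing
# (an assumed aggregation rule for illustration).
agg = study_a * study_b
agg /= agg.sum()
print("aggregated support for [H1, complement]:", agg.round(3))
```

Note that the raw GORIC(A) values from the two studies are on different scales, yet the weights are comparable, which is what allows evidence to be combined across studies with different designs.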