Summary: Publication bias is a serious problem in systematic reviews and meta-analyses that can affect the validity and generalizability of conclusions. Current approaches to dealing with publication bias fall into two classes: selection models and funnel-plot-based methods. Selection models use weight functions to adjust the overall effect size estimate and are usually employed as sensitivity analyses to assess the potential impact of publication bias. Funnel-plot-based methods include visual examination of a funnel plot, regression and rank tests, and the nonparametric trim-and-fill method. Although these approaches are widely used in applications, measures for quantifying publication bias have seldom been studied in the literature. Such measures can serve as a characteristic of a meta-analysis and permit comparisons of publication bias across different meta-analyses. Egger’s regression intercept may be considered a candidate measure, but it lacks an intuitive interpretation. This article introduces a new measure, the skewness of the standardized deviates, to quantify publication bias; it describes the asymmetry of the collected studies’ distribution. In addition, a new test for publication bias is derived from the skewness. Large-sample properties of the new measure are studied, and its performance is illustrated using simulations and three case studies.
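To make the idea concrete, the sketch below computes the sample skewness of standardized deviates under a simple common-effect model, where each deviate is the study's effect estimate minus the inverse-variance pooled estimate, divided by the study's standard error. This is a minimal sketch of the general approach, not the article's exact estimator; the function name and the toy data are hypothetical.

```python
import numpy as np

def skewness_of_deviates(y, s):
    """Sample skewness of standardized deviates as a publication-bias
    measure (a sketch; the article's exact estimator may differ)."""
    y, s = np.asarray(y, float), np.asarray(s, float)
    w = 1.0 / s**2
    theta_hat = np.sum(w * y) / np.sum(w)   # inverse-variance pooled estimate
    d = (y - theta_hat) / s                 # standardized deviates
    m2 = np.mean((d - d.mean()) ** 2)
    m3 = np.mean((d - d.mean()) ** 3)
    return m3 / m2 ** 1.5                   # moment-based sample skewness

# Toy data: roughly symmetric deviates should give skewness near 0;
# suppression of small unfavorable studies pulls it away from 0.
rng = np.random.default_rng(1)
y = rng.normal(0.3, 0.25, size=20)
s = rng.uniform(0.1, 0.4, size=20)
print(skewness_of_deviates(y, s))
```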
Publication bias is a type of systematic error in evidence synthesis that causes the synthesized evidence to misrepresent the underlying truth. Clinical studies with favorable results are more likely to be published and thus exaggerate the synthesized evidence in meta-analyses. The trim-and-fill method is a popular tool to detect and adjust for publication bias. Simulation studies have been performed to assess this method, but they may not fully represent realistic settings of publication bias. Based on real-world meta-analyses, this article provides practical guidelines and recommendations for using the trim-and-fill method. We used a worked illustrative example to demonstrate the idea of the trim-and-fill method, and we reviewed three estimators (R0, L0, and Q0) for imputing missing studies. A resampling method was proposed to calculate P values for all three estimators. We also summarized available meta-analysis software programs for implementing the trim-and-fill method. Moreover, we applied the method to 29,932 meta-analyses from the Cochrane Database of Systematic Reviews and empirically evaluated its overall performance. We carefully explored potential issues that occurred in our analysis. The estimators L0 and Q0 detected at least one missing study in more meta-analyses than R0, while Q0 often imputed more missing studies than L0. After adding imputed missing studies, the significance of heterogeneity and overall effect sizes changed in many meta-analyses. All estimators generally converged quickly; however, L0 and Q0 failed to converge in a few meta-analyses that contained studies with identical effect sizes. Also, P values produced by different estimators could lead to different conclusions about the significance of publication bias. Outliers and the pre-specified direction of missing studies could have a substantial impact on the trim-and-fill results. Meta-analysts are advised to apply the trim-and-fill method with great caution when using meta-analysis software programs. Some default settings (e.g., the choice of estimator and the direction of missing studies) in these programs may not be optimal for a given meta-analysis; they should be determined on a case-by-case basis. Sensitivity analyses are encouraged to examine the effects of different estimators and outlying studies. Also, the chosen trim-and-fill estimator should be routinely reported in meta-analyses, because the results depend heavily on it.
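For readers unfamiliar with the estimators, the sketch below computes one iteration of the standard rank-based formulas of Duval and Tweedie: L0 = (4·Tn − n(n+1))/(2n − 1), where Tn is the rank sum attached to positive centered effects, and R0 = γ* − 1, where γ* is the length of the rightmost run of positive values. The full method iterates trimming and re-estimation to convergence, then mirrors the trimmed studies; the function name and the assumed direction of missing studies are illustrative.

```python
import numpy as np

def estimate_k0(y, theta_hat):
    """One iteration of the rank-based trim-and-fill estimators of the
    number of missing studies (Duval-Tweedie formulas; sketch only).
    Assumes the missing studies lie below theta_hat."""
    d = np.asarray(y, float) - theta_hat
    n = len(d)
    ranks = np.argsort(np.argsort(np.abs(d))) + 1   # ranks of |centered effects|
    T_n = ranks[d > 0].sum()                        # Wilcoxon-type rank sum

    L0 = (4 * T_n - n * (n + 1)) / (2 * n - 1)

    # R0 = gamma* - 1, where gamma* is the length of the rightmost run of
    # ranks belonging to positive centered effects
    by_rank = np.argsort(ranks)                     # study indices, smallest |d| first
    gamma_star = 0
    for i in reversed(by_rank):
        if d[i] > 0:
            gamma_star += 1
        else:
            break
    R0 = gamma_star - 1

    return max(0, round(L0)), max(0, R0)

# The full method would trim the k0 most extreme studies, re-estimate
# theta_hat, iterate to convergence, and then "fill" mirrored studies.
```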
Meta‐analyses have been increasingly used to synthesize proportions (eg, disease prevalence) from multiple studies in recent years. Arcsine‐based transformations, especially the Freeman–Tukey double‐arcsine transformation, are popular tools for stabilizing the variance of each study's proportion in two‐step meta‐analysis methods. Although they offer some benefits over the conventional logit transformation, they also suffer from several important limitations (eg, lack of interpretability) and may lead to misleading conclusions. Generalized linear mixed models and Bayesian models are intuitive one‐step alternatives and can be readily implemented via many software programs. This article explains various pros and cons of the arcsine‐based transformations and discusses alternatives that may be generally superior to the currently popular practice.
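For reference, the Freeman–Tukey double-arcsine transform of x events out of n is t = ½[arcsin√(x/(n+1)) + arcsin√((x+1)/(n+1))], with approximate variance 1/(4n+2). The sketch below implements this transform together with Miller's back-transformation, whose dependence on a chosen sample size n (eg, a harmonic mean across studies) is one source of the interpretability problems noted above; function names are illustrative.

```python
import numpy as np

def freeman_tukey(x, n):
    """Freeman-Tukey double-arcsine transform of x events out of n,
    with its approximate stabilized variance 1/(4n + 2)."""
    x, n = np.asarray(x, float), np.asarray(n, float)
    t = 0.5 * (np.arcsin(np.sqrt(x / (n + 1))) +
               np.arcsin(np.sqrt((x + 1) / (n + 1))))
    return t, 1.0 / (4 * n + 2)

def ft_back_transform(t, n):
    """Miller's inversion of the double-arcsine transform. The result
    depends on the chosen n (eg, a harmonic mean of the study sizes),
    which is one source of the interpretability problems noted above."""
    s = np.sin(2 * t)
    return 0.5 * (1 - np.sign(np.cos(2 * t)) *
                  np.sqrt(1 - (s + (s - 1 / s) / n) ** 2))

t, v = freeman_tukey(x=5, n=50)
print(ft_back_transform(t, 50))   # recovers roughly 5/50 = 0.1
```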
Given the relatively low agreement between many publication bias tests, meta-analysts should not rely on a single test and may apply multiple tests with different assumptions. Non-statistical approaches to evaluating publication bias (e.g., searching clinical trial registries, records of drug-approval agencies, and scientific conference proceedings) remain essential.
Epidemiologic research often involves meta-analyses of proportions. Conventional two-step methods first transform each study’s proportion and subsequently perform a meta-analysis on the transformed scale. They suffer from several important limitations: the log and logit transformations impractically treat within-study variances as fixed, known values and require ad hoc corrections for zero counts; the results from arcsine-based transformations may lack interpretability. Generalized linear mixed models (GLMMs) have been recommended in meta-analyses as a one-step approach to fully accounting for within-study uncertainties. However, they are seldom used in current practice to synthesize proportions. This article summarizes various methods for meta-analyses of proportions, illustrates their implementations, and explores their performance using real and simulated datasets. In general, GLMMs led to smaller biases and mean squared errors and higher coverage probabilities than two-step methods. Many software programs are readily available to implement these methods.
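As a concrete illustration of the one-step idea, the sketch below fits the simplest such GLMM, a random-intercept logistic model (x_i ~ Binomial(n_i, p_i), logit(p_i) = μ + u_i, u_i ~ N(0, τ²)), by maximizing the marginal likelihood with Gauss–Hermite quadrature. This is a minimal sketch under those assumptions, not any particular program's implementation; in practice one would use established routines, and note that expit(μ) is the proportion for a typical study rather than the marginal mean.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import roots_hermite, expit

def neg_marginal_loglik(params, x, n, nodes, weights):
    """Marginal negative log-likelihood of the random-intercept logistic
    GLMM, integrating out u_i ~ N(0, tau^2) by Gauss-Hermite quadrature."""
    mu, log_tau = params
    tau = np.exp(log_tau)                      # keep tau positive
    u = np.sqrt(2.0) * tau * nodes             # change of variables for GH nodes
    eta = mu + u[None, :]                      # (studies, quadrature points)
    # binomial log-kernel; the binomial coefficient drops out of the MLE
    logp = x[:, None] * eta - n[:, None] * np.logaddexp(0.0, eta)
    loglik_i = np.log((weights[None, :] * np.exp(logp)).sum(axis=1) / np.sqrt(np.pi))
    return -loglik_i.sum()

def fit_glmm(x, n, quad_points=21):
    """One-step meta-analysis of proportions: x events out of n per study."""
    x, n = np.asarray(x, float), np.asarray(n, float)
    nodes, weights = roots_hermite(quad_points)
    res = minimize(neg_marginal_loglik, x0=[0.0, np.log(0.5)],
                   args=(x, n, nodes, weights), method="Nelder-Mead")
    mu, tau = res.x[0], np.exp(res.x[1])
    return expit(mu), tau                      # typical-study proportion, tau

# Toy data: zero counts need no ad hoc correction in the one-step model.
p_hat, tau_hat = fit_glmm(x=[0, 2, 5, 1, 9], n=[20, 35, 50, 18, 80])
```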
Background: Meta-analyses frequently include studies with small sample sizes. Researchers usually fail to account for sampling error in the reported within-study variances: they model the observed study-specific effect sizes with the within-study variances and treat these sample variances as if they were the true variances. This sampling error may be influential when sample sizes are small, and this article illustrates that it may lead to substantial bias in meta-analysis results. Methods: We conducted extensive simulation studies to assess the bias caused by sampling error. Meta-analyses with continuous and binary outcomes were simulated over various ranges of sample size and extents of heterogeneity. We evaluated the bias and the confidence interval coverage for five commonly used effect sizes (i.e., the mean difference, standardized mean difference, odds ratio, risk ratio, and risk difference). Results: Sampling error did not cause noticeable bias when the effect size was the mean difference, but the standardized mean difference, odds ratio, risk ratio, and risk difference suffered from this bias to different extents. The bias in the estimated overall odds ratio and risk ratio was noticeable even when each individual study had a sample size greater than 50 under some settings. Also, Hedges’ g, the bias-corrected estimate of the standardized mean difference within studies, might lead to larger bias than Cohen’s d in meta-analysis results. Conclusions: Caution is needed when performing meta-analyses with small sample sizes. The reported within-study variances should not simply be treated as the true variances, and their sampling error should be fully accounted for in such meta-analyses.
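For context, the sketch below computes Cohen's d, Hedges' g (with the small-sample correction factor J = 1 − 3/(4·df − 1)), and one commonly used large-sample variance formula for a single two-group study. The variance formula plugs in the estimated effect size itself, which is precisely the kind of sampling error the article examines; the function name and the particular variance formula are illustrative.

```python
import numpy as np

def smd_estimates(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d and Hedges' g for a single two-group study, with one
    commonly used large-sample variance formula. The variance plugs in
    the estimated effect size, which is exactly the sampling error this
    article examines."""
    df = n1 + n2 - 2
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / sd_pooled
    J = 1 - 3 / (4 * df - 1)                    # Hedges' small-sample correction
    g = J * d
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    var_g = J**2 * var_d
    return d, g, var_d, var_g

# Toy study with n = 10 per arm: the correction shrinks d noticeably.
print(smd_estimates(m1=5.0, sd1=2.0, n1=10, m2=4.0, sd2=2.1, n2=10))
```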
Publication bias occurs when studies with statistically significant results have an increased likelihood of being published. Publication bias is commonly associated with an inflated treatment effect, which lowers decision makers’ certainty about the evidence. In this guide, we propose that systematic reviewers and decision makers consider the direction and magnitude of publication bias, as opposed to just a binary determination of its presence, before lowering their certainty in the evidence. The bias may not always exaggerate the treatment effect, and bias of trivial magnitude may not affect the decision at hand. Various statistical approaches are available to determine the direction and magnitude of publication bias.
It is common to measure continuous outcomes using different scales (eg, quality of life, severity of anxiety or depression); therefore, these outcomes need to be standardized before pooling in a meta-analysis. Common methods of standardization include using the standardized mean difference, the odds ratio derived from continuous data, the minimally important difference, and the ratio of means. Other ways of making data more meaningful to end users include transforming standardized effects back to original scales and transforming odds ratios to absolute effects using an assumed baseline risk. For these methods to be valid, the scales or instruments being combined across studies need to have assessed the same or a similar construct.
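As a worked illustration of these conversions, the sketch below back-transforms a pooled SMD to original units using a representative standard deviation, converts an SMD to an odds ratio via the Hasselblad–Hedges relation ln(OR) = SMD·π/√3, and turns that odds ratio into an absolute risk difference under an assumed baseline risk. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical pooled result: all numbers below are illustrative only.
smd = 0.40                        # pooled standardized mean difference

# Back-transform to original units using a representative SD of a
# familiar instrument (an assumption the analyst must justify).
sd_ref = 9.0
mean_difference = smd * sd_ref    # e.g., 3.6 points on the original scale

# Hasselblad-Hedges conversion of an SMD to a (log) odds ratio
log_or = smd * np.pi / np.sqrt(3)
odds_ratio = np.exp(log_or)       # about 2.07 here

# Odds ratio -> absolute effect under an assumed baseline (control) risk
p0 = 0.30
p1 = odds_ratio * p0 / (1 - p0 + odds_ratio * p0)
risk_difference = p1 - p0         # about 0.17 here
print(mean_difference, odds_ratio, risk_difference)
```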