Funnel plots, and tests for funnel plot asymmetry, have been widely used to examine bias in the results of meta-analyses. Funnel plot asymmetry should not be equated with publication bias, because it has a number of other possible causes. This article describes how to interpret funnel plot asymmetry, recommends appropriate tests, and explains the implications for the choice of meta-analysis model.

This article recommends how to examine and interpret funnel plot asymmetry (also known as small study effects [2]) in meta-analyses of randomised controlled trials. The recommendations are based on a detailed MEDLINE review of literature published up to 2007 and discussions among methodologists, who extended and adapted guidance previously summarised in the Cochrane Handbook for Systematic Reviews of Interventions [7].

What is a funnel plot? A funnel plot is a scatter plot of the effect estimates from individual studies against some measure of each study's size or precision. The standard error of the effect estimate is often chosen as the measure of study size and plotted on the vertical axis [8] with a reversed scale that places the larger, most powerful studies towards the top. The effect estimates from smaller studies should scatter more widely at the bottom, with the spread narrowing among larger studies [9]. In the absence of bias and between-study heterogeneity, the scatter will be due to sampling variation alone and the plot will resemble a symmetrical inverted funnel (fig 1). A triangle centred on a fixed effect summary estimate and extending 1.96 standard errors either side will include about 95% of studies if no bias is present and the true treatment effect is the same in each study.
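For readers who want to reproduce such a plot, here is a minimal R sketch (R being the environment used elsewhere on this page); the study effects and standard errors are simulated for illustration, and funnel() from the meta package draws the plot:

```r
# Minimal sketch: funnel plot of simulated studies with the meta package.
# All data below are simulated for illustration only.
library(meta)

set.seed(42)
k    <- 30                               # number of studies
seTE <- runif(k, 0.05, 0.6)              # standard errors (study precision)
TE   <- rnorm(k, mean = 0.2, sd = seTE)  # effects scatter more when SE is large

m <- metagen(TE = TE, seTE = seTE, sm = "MD")

# The vertical axis is the standard error on a reversed scale, so large,
# precise studies sit at the top and small studies spread out at the bottom.
funnel(m)
```

In the absence of bias the points should fill a roughly symmetrical inverted funnel around the summary estimate.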
Objective: Meta-analysis is of fundamental importance for obtaining an unbiased assessment of the available evidence. In general, the use of meta-analysis has been increasing over the last three decades, with mental health as a major research topic. It is therefore essential to understand its methodology well and to interpret its results correctly. In this publication, we describe how to perform a meta-analysis with the freely available statistical software environment R, using a working example taken from the field of mental health.

Methods: The R package meta is used to conduct a standard meta-analysis. Sensitivity analyses for missing binary outcome data and potential selection bias are conducted with the R package metasens. All essential R commands are provided and clearly described to conduct and report analyses.

Results: The working example considers a binary outcome: we show how to conduct a fixed effect and random effects meta-analysis and subgroup analysis, produce a forest and funnel plot, and test and adjust for funnel plot asymmetry. All these steps work similarly for other outcome types.

Conclusions: R represents a powerful and flexible tool to conduct meta-analyses. This publication gives a brief glimpse into the topic and provides directions to more advanced meta-analysis methods available in R.
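A minimal sketch of that workflow is given below, assuming a placeholder data frame mydata with invented column names (event.e, n.e, event.c, n.c, study, region); the calls are standard meta/metasens commands, but the paper's actual working example and code may differ:

```r
# Sketch of the workflow described above, using meta and metasens.
# 'mydata' and its column names are invented placeholders.
library(meta)
library(metasens)

# Binary outcome: event counts and sample sizes in both arms
m <- metabin(event.e, n.e, event.c, n.c, data = mydata,
             studlab = study, sm = "OR")
summary(m)              # fixed effect and random effects estimates

# Subgroup analysis by a study-level variable
# (meta >= 5.0; older versions use byvar = instead of subgroup =)
update(m, subgroup = region)

# Forest and funnel plots
forest(m)
funnel(m)

# Test for funnel plot asymmetry (Egger's linear regression test)
metabias(m, method.bias = "linreg")

# Adjust for potential selection bias: limit meta-analysis (metasens)
limitmeta(m)
```

With roughly ten or more studies, metabias() applies the asymmetry test, and limitmeta() then yields a shrunken, asymmetry-adjusted summary estimate.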
Background: Network meta-analysis is used to compare three or more treatments for the same condition. Within a Bayesian framework, for each treatment the probability of being best, or, more generally, the probability that it has a certain rank, can be derived from the posterior distributions of all treatments. The treatments can then be ranked by the surface under the cumulative ranking curve (SUCRA). For comparing treatments in a network meta-analysis, we propose a frequentist analogue to SUCRA, which we call the P-score, that works without resampling.

Methods: P-scores are based solely on the point estimates and standard errors of the frequentist network meta-analysis estimates under a normality assumption and can easily be calculated as means of one-sided p-values. They measure the mean extent of certainty that a treatment is better than the competing treatments.

Results: Using case studies of network meta-analysis in diabetes and depression, we demonstrate that the numerical values of SUCRA and the P-score are nearly identical.

Conclusions: Ranking treatments in frequentist network meta-analysis works without resampling. Like the SUCRA values, P-scores induce a ranking of all treatments that mostly follows that of the point estimates, but takes precision into account. However, neither SUCRA nor the P-score offers a major advantage compared to looking at credible or confidence intervals.

Electronic supplementary material: The online version of this article (doi:10.1186/s12874-015-0060-8) contains supplementary material, which is available to authorized users.
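In R, this ranking is implemented in the netmeta package; the sketch below assumes a placeholder pairwise data set pairdata with invented column names, not the diabetes or depression case studies themselves:

```r
# Minimal sketch: P-scores for treatment ranking with the netmeta package.
# 'pairdata' and its columns are invented placeholders.
library(netmeta)

net <- netmeta(TE = TE, seTE = seTE, treat1 = treat1, treat2 = treat2,
               studlab = studlab, data = pairdata, sm = "OR")

# P-score: the mean extent of certainty that a treatment beats its
# competitors, computed from point estimates and standard errors as
# means of one-sided p-values -- no resampling needed.
netrank(net)
```

The small.values argument of netrank() controls whether small effect estimates count as desirable or undesirable for the ranking.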
Background: The heterogeneity statistic I², interpreted as the percentage of variability due to heterogeneity between studies rather than sampling error, depends on precision, that is, the size of the studies included.
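That dependence is visible in the usual definitions of I² (a sketch of the standard formulas, where Q is Cochran's heterogeneity statistic, k the number of studies, \hat\tau^2 the estimated between-study variance, and s^2 a "typical" within-study variance):

```latex
% I^2 in terms of Cochran's Q with k studies:
I^2 = \max\!\left(0,\; \frac{Q - (k-1)}{Q}\right) \times 100\%
% equivalently, in terms of the between-study variance \hat\tau^2 and a
% "typical" within-study (sampling) variance s^2:
I^2 \approx \frac{\hat\tau^2}{\hat\tau^2 + s^2} \times 100\%
% As studies grow larger, s^2 shrinks, so I^2 rises even when \hat\tau^2
% is unchanged -- hence I^2 depends on the precision of the included studies.
```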
Objectives: Low-dose aspirin given for secondary prevention of cardiovascular disease is frequently withdrawn prior to surgical or diagnostic procedures to reduce bleeding complications. This may expose patients to increased cardiovascular morbidity and mortality. The aim of this study was to review and quantify the cardiovascular risks of periprocedural aspirin withdrawal and the bleeding risks of aspirin continuation.

Methods: We screened MEDLINE (January 1970 to October 2004), with additional manual cross-referencing, for clinical studies, surveys of doctors' opinions, and guidelines.

Results: No studies were found that reported the relative risk of acute cardiovascular events after aspirin withdrawal compared with its continuation. However, retrospective investigations revealed that aspirin withdrawal precedes up to 10.2% of acute cardiovascular syndromes. The time interval between discontinuation and acute cerebral events was 14.3 ± 11.3 days, 8.5 ± 3.6 days for acute coronary syndromes, and 25.8 ± 18.1 days for acute peripheral arterial syndromes (P < 0.02 versus acute coronary syndromes). On aspirin-related bleeding risks, we obtained 41 studies (12 observational retrospective, 19 observational prospective, 10 randomized), reporting on 49,590 patients (14,981 on aspirin). The baseline frequency of bleeding complications varied between 0% (skin lesion excision, cataract surgery) and 75% (transrectal prostate biopsy). Whilst aspirin increased the rate of bleeding complications by a factor of 1.5 (median; interquartile range 1.0–2.5), it did not increase the severity of bleeding complications (with the exceptions of intracranial surgery and possibly transurethral prostatectomy). Surveys amongst doctors on the management of this problem demonstrate wide variation. Available guidelines are scarce and in part contradictory.

Conclusions: Low-dose aspirin should be discontinued prior to an intended operation or procedure only if the bleeding risks it causes, in terms of increased mortality or sequelae, are comparable to the observed cardiovascular risks after its withdrawal. Controlled clinical studies are urgently needed.
Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling, and the inclusion of multi-arm trials are addressed.
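The core computation is compact enough to sketch in base R; the three-treatment network and all numbers below are invented, and MASS::ginv() supplies the Moore-Penrose pseudoinverse:

```r
# Sketch of the graph-theoretical estimate: consistent treatment effects
# via the Moore-Penrose pseudoinverse of the network's Laplacian matrix.
# The 3-treatment example network and all numbers are invented.
library(MASS)   # for ginv()

# Edge-vertex incidence matrix: comparisons A-B, A-C, B-C
B <- rbind(c( 1, -1,  0),    # A vs B
           c( 1,  0, -1),    # A vs C
           c( 0,  1, -1))    # B vs C
theta <- c(0.5, 0.8, 0.2)    # observed comparison effects ("voltages")
v     <- c(0.04, 0.09, 0.05) # their variances ("resistances")

W     <- diag(1 / v)         # weights = inverse variances ("conductances")
L     <- t(B) %*% W %*% B    # Laplacian matrix of the weighted graph
Lplus <- ginv(L)             # Moore-Penrose pseudoinverse

# Consistent effects induced in the edges (hat matrix applied to theta)
theta.nma <- B %*% Lplus %*% t(B) %*% W %*% theta

# Variance of a comparison j vs k = effective resistance between j and k
res <- function(j, k) Lplus[j, j] + Lplus[k, k] - 2 * Lplus[j, k]
theta.nma
res(1, 2)   # variance of the A vs B network estimate
```

Applied to a single pairwise comparison, this reduces to the usual inverse-variance fixed effect estimate, in line with the result stated above.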
y-Randomization is a tool used in validation of QSPR/QSAR models, whereby the performance of the original model in data description (r²) is compared to that of models built for permuted (randomly shuffled) response, based on the original descriptor pool and the original model building procedure. We compared y-randomization and several variants thereof, using original response, permuted response, or random number pseudoresponse and original descriptors or random number pseudodescriptors, in the typical setting of multilinear regression (MLR) with descriptor selection. For each combination of number of observations (compounds), number of descriptors in the final model, and number of descriptors in the pool to select from, computer experiments using the same descriptor selection method result in two different mean highest random r² values. A lower one is produced by y-randomization or a variant likewise based on the original descriptors, while a higher one is obtained from variants that use random number pseudodescriptors. The difference is due to the intercorrelation of real descriptors in the pool. We propose to compare an original model's r² to both of these whenever possible. The meaning of the three possible outcomes of such a double test is discussed. Often y-randomization is not available to a potential user of a model because the values of all descriptors in the pool are not published for all compounds. In such cases random number experiments as proposed here are still possible. The test was applied to several recently published MLR QSAR equations, and cases of failure were identified. Some progress is also reported toward the aim of obtaining the mean highest r² of random pseudomodels by calculation rather than by tedious multiple simulations on random number variables.
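The procedure itself is straightforward to sketch in R; below, random-number data stand in for real descriptors and response, and forward selection with step() stands in for whatever descriptor-selection method an original model used:

```r
# Sketch of y-randomization for MLR with descriptor selection.
# Data are random numbers; step() stands in for the original
# descriptor-selection procedure.
set.seed(1)
n <- 50; p <- 20                        # compounds, descriptor pool size
X <- as.data.frame(matrix(rnorm(n * p), n, p))
names(X) <- paste0("d", 1:p)
y <- rnorm(n)                           # (pseudo)response

fit_r2 <- function(resp) {
  dat  <- cbind(y = resp, X)
  base <- lm(y ~ 1, data = dat)
  # forward selection of up to 4 descriptors from the pool
  sel <- step(base,
              scope = list(lower = ~ 1, upper = reformulate(names(X))),
              direction = "forward", steps = 4, trace = 0)
  summary(sel)$r.squared
}

r2.orig <- fit_r2(y)
# y-randomization: refit after randomly shuffling the response
r2.rand <- replicate(100, fit_r2(sample(y)))
c(original = r2.orig, mean.random = mean(r2.rand), max.random = max(r2.rand))
```

An original model whose r² does not clearly exceed the distribution of random r² values fails the test, since descriptor selection alone can produce sizeable r² by chance.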