The rapid growth of human genetics creates countless opportunities for studies of disease association. Given the number of potentially identifiable genetic markers and the multitude of clinical outcomes to which these may be linked, the testing and validation of statistical hypotheses in genetic epidemiology is a task of unprecedented scale. Meta-analysis provides a quantitative approach for combining the results of various studies on the same topic, and for estimating and explaining their diversity. Here, we have evaluated 370 studies addressing 36 genetic associations for various disease outcomes by meta-analysis. We show that significant between-study heterogeneity (diversity) is frequent, and that the results of the first study correlate only modestly with subsequent research on the same association. The first study often suggests a stronger genetic effect than is found by subsequent studies. Both bias and genuine population diversity might explain why early association studies tend to overestimate the disease protection or predisposition conferred by a genetic polymorphism. We conclude that a systematic meta-analytic approach may assist in estimating population-wide effects of genetic risk factors in human disease.
Publication bias, the selective publication of studies based on whether results are "positive" or not, is a major threat to the validity of clinical research.1-4 This bias can distort the totality of the available evidence on a research question, which leads to misleading inferences in reviews and meta-analyses. Without up-front study registration, however, this bias is difficult to identify after the fact.5 Many tests have therefore been proposed to help identify publication bias.6

The most common approaches investigate the presence of asymmetry in (inverted) funnel plots.7-10 A funnel plot shows the relation between study effect size and its precision. The premise is that small studies are more likely to remain unpublished if their results are nonsignificant or unfavourable, whereas larger studies are published regardless; this selective publication produces funnel-plot asymmetry. Although visual inspection of funnel plots is unreliable,11,12 statistical tests can be used to quantify the asymmetry.7-10 These tests have become popular: one relevant article8 has been cited more than 1000 times.

The limitations of these tests have been documented for some time. Begg and Mazumdar7 noted in 1994 that the false-positive rates of their popular rank-correlation test were too low. In 2000, Sterne and colleagues13 showed in a simulation study that the regression method described by Egger and associates8 was more powerful than the rank-correlation test, although the power of either method was low for meta-analyses of 10 or fewer trials. False-positive results were found to be a major concern in the presence of heterogeneity.13,14 To reduce this problem, a modified regression test was developed,10 and several other tests have been proposed.6,15 Because these tests differ in their assumptions and statistical properties, discordant results can be expected when different tests are applied.

There are situations in which the use of these tests is clearly inappropriate, and others in which their use is futile or meaningless. Applying these tests to a meta-analysis with few studies is not wrong, but it has low statistical power. Application in the presence of heterogeneity is more clearly inappropriate and may lead to false-positive claims of publication bias.14,16,17 When all available studies are equally large (i.e., have similar precision), the tests are not meaningful.
Finally, it makes no sense to evaluate whether studies with significant results are preferentially published when no study with significant results has been published. Despite these limitations, these tests figure prominently in the medical literature, and it would be useful to estimate how often they are appropriately or meaningfully applied. We therefore appraised almost 7000 meta-analyses in the Cochrane Database of Systematic Reviews to discover the extent to which tests of funnel-plot asymmetry would be inappropriate or nonconcordant. We also examined the appropriateness of the application of asymmetry testing in meta-analyses recently published in print journals.
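The regression test of Egger and associates8 discussed above can be sketched in a few lines: each study's standardized effect (the effect estimate divided by its standard error) is regressed on its precision (the inverse of the standard error), and a nonzero intercept is taken as evidence of small-study asymmetry. The following is a minimal illustration of that idea, not the exact implementation evaluated in the cited simulation studies.

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger-style regression test for funnel-plot asymmetry.

    Regresses standardized effects (effect / SE) on precisions (1 / SE)
    by ordinary least squares and tests whether the intercept differs
    from zero. Returns (intercept, two-sided p-value).
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses            # standardized effects
    prec = 1.0 / ses             # precisions
    X = np.column_stack([np.ones_like(prec), prec])  # intercept + slope design
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    dof = len(z) - 2
    resid = z - X @ beta
    sigma2 = (resid @ resid) / dof                   # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)            # coefficient covariance
    t_stat = beta[0] / np.sqrt(cov[0, 0])
    p_value = 2 * stats.t.sf(abs(t_stat), dof)
    return beta[0], p_value
```

Consistent with the caveats above, such a test is only worth running when there are enough studies of varying size; with a handful of trials, or trials of near-identical precision, the regression carries little information.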
The R environment provides a natural platform for developing new statistical methods due to the mathematical expressiveness of the language, the large number of existing libraries, and the active developer community. One drawback to R, however, is the learning curve; programming is a deterrent to non-technical users, who typically prefer graphical user interfaces (GUIs) to command line environments. Thus, while statisticians develop new methods in R, practitioners are often behind in terms of the statistical techniques they use as they rely on GUI applications. Meta-analysis is an instructive example; cutting-edge meta-analysis methods are often ignored by the overwhelming majority of practitioners, in part because they have no easy way of applying them. This paper proposes a strategy to close the gap between the statistical state-of-the-science and what is applied in practice. We present open-source meta-analysis software that uses R as the underlying statistical engine, and Python for the GUI. We present a framework that allows methodologists to implement new methods in R that are then automatically integrated into the GUI for use by end-users, so long as the programmer conforms to our interface. Such an approach allows an intuitive interface for non-technical users while leveraging the latest advanced statistical methods implemented by methodologists.
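As a purely hypothetical sketch of the kind of interface contract described above, a Python-side registry might hold, for each R method, its display name, its parameters with defaults, and the R function it maps to; the GUI can then build an input form and assemble the eventual R call from that metadata alone. None of these names (`MethodSpec`, `register`, `build_r_call`, `run_binary_random`) come from the actual software; they only illustrate the design.

```python
from dataclasses import dataclass, field

@dataclass
class MethodSpec:
    name: str                    # label shown in the GUI's method menu
    r_function: str              # R function the backend will invoke (hypothetical name)
    params: dict = field(default_factory=dict)  # parameter name -> default value

# Every registered method becomes available to the GUI automatically.
REGISTRY: dict[str, MethodSpec] = {}

def register(spec: MethodSpec) -> None:
    """Add a method to the registry; the GUI lists whatever is registered."""
    REGISTRY[spec.name] = spec

def build_r_call(spec: MethodSpec, user_values: dict) -> str:
    """Assemble the R expression the statistical backend would evaluate."""
    merged = {**spec.params, **user_values}  # user input overrides defaults
    args = ", ".join(f"{k}={v!r}" for k, v in merged.items())
    return f"{spec.r_function}({args})"

# A methodologist adds one declaration; no GUI code changes are needed.
register(MethodSpec("Random-effects meta-analysis", "run_binary_random",
                    params={"method": "DL"}))
```

The point of such a contract is that the methodologist writes only R plus one metadata declaration, while form generation and dispatch stay generic in the GUI layer.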
A history of breastfeeding is associated with a reduced risk of many diseases in infants and mothers. Future research would benefit from clearer selection criteria, definitions of breastfeeding exposure, and adjustment for potential confounders. Matched designs such as sibling analysis may provide a method to control for hereditary and household factors that are important in certain outcomes.