Background and aims-Controversy surrounds the risk of colorectal cancer (CRC) in ulcerative colitis (UC). Many studies have investigated this risk and reported widely varying rates. Methods-A literature search using Medline with the explosion of references identified 194 studies. Of these, 116 met our inclusion criteria, from which the number of patients and cancers detected could be extracted. Overall pooled estimates, with 95% confidence intervals (CI), of cancer prevalence and incidence were obtained using a random effects model on either the log odds or log incidence scale, as appropriate. Results-The overall prevalence of CRC in any UC patient, based on 116 studies, was estimated to be 3.7% (95% CI 3.2-4.2%). Of the 116 studies, 41 reported colitis duration. From these, the overall incidence rate was 3/1000 person years duration (pyd) (95% CI 2/1000 to 4/1000). The overall incidence rate for any child was 6/1000 pyd (95% CI 3/1000 to 13/1000). Of the 41 studies, 19 reported results stratified into 10 year intervals of disease duration. For the first 10 years the incidence rate was 2/1000 pyd (95% CI 1/1000 to 2/1000), for the second decade it was estimated to be 7/1000 pyd (95% CI 4/1000 to 12/1000), and in the third decade it was 12/1000 pyd (95% CI 7/1000 to 19/1000). These incidence rates corresponded to cumulative probabilities of 2% by 10 years, 8% by 20 years, and 18% by 30 years. Cancer incidence rates varied geographically, being 5/1000 pyd in the USA, 4/1000 pyd in the UK, and 2/1000 pyd in Scandinavia and other countries. The cancer risk has increased over time since 1955, but this trend was not statistically significant (p=0.8). Conclusions-Using new meta-analysis techniques we determined the risk of CRC in UC by decade of disease and defined the risk in pancolitics and children. We found a non-significant increase in risk over time and estimated how risk varies with geography. (Gut 2001;48:526-535)
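The pooling described above, a random effects model on the log incidence scale, can be sketched with the DerSimonian-Laird estimator. This is a minimal illustration, not the authors' code; the three study estimates and variances below are hypothetical.

```python
import math

def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooling on the log scale."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical log incidence rates (per 1000 pyd) from three studies
log_rates = [math.log(2.0), math.log(7.0), math.log(12.0)]
variances = [0.10, 0.15, 0.20]
pooled, se, tau2 = pool_random_effects(log_rates, variances)
lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
```

Back-transforming `pooled` with `math.exp` returns the estimate and its 95% CI to the incidence-rate scale.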
Because of appropriate type I error rates and reduction in the correlation between the lnOR and its variance, the alternative regression test can be used in place of Egger's regression test when the summary estimates are lnORs.
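For context, the classic Egger regression test against which such alternatives are compared can be sketched as an ordinary least-squares fit of the standardized effect on precision; an intercept far from zero signals small-study (funnel-plot) asymmetry. This is an illustrative sketch with made-up data, not the alternative test the abstract evaluates.

```python
def egger_test(effects, ses):
    """Egger's regression test: regress standardized effect (effect/SE)
    on precision (1/SE); a non-zero intercept suggests asymmetry."""
    x = [1.0 / s for s in ses]                        # precision
    y = [e / s for e, s in zip(effects, ses)]         # standardized effect
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx                                 # estimates the pooled effect
    intercept = ybar - slope * xbar                   # the asymmetry statistic
    return intercept, slope

# Perfectly symmetric hypothetical data: every study sees the same lnOR
intercept, slope = egger_test([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.3, 0.4])
```

With identical effects across studies the fit is exact, so the intercept is zero; in practice the intercept is tested against its standard error.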
Objective To assess the effect of publication bias on the results and conclusions of systematic reviews and meta-analyses. Design Analysis of published meta-analyses by trim and fill method. Studies 48 reviews in Cochrane Database of Systematic Reviews that considered a binary endpoint and contained 10 or more individual studies. Main outcome measures Number of reviews with missing studies and effect on conclusions of meta-analyses. Results The trim and fill fixed effects analysis method estimated that 26 (54%) of reviews had missing studies and in 10 the number missing was significant. The corresponding figures with a random effects model were 23 (48%) and eight. In four cases, statistical inferences regarding the effect of the intervention were changed after the overall estimate was adjusted for publication bias. Conclusions Publication or related biases were common within the sample of meta-analyses assessed. In most cases these biases did not affect the conclusions. Nevertheless, researchers should check routinely whether conclusions of systematic reviews are robust to possible non-random selection mechanisms.
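The first step of trim and fill, estimating how many studies are missing from one side of the funnel plot, can be sketched with the rank-based L0 estimator. This is a simplified one-pass version with hypothetical effect sizes; the full Duval and Tweedie method iterates the trimming, re-estimates the pooled effect, and then imputes the "filled" studies.

```python
def l0_missing(effects, pooled):
    """One-pass L0 estimate of the number of suppressed studies
    (rank-based step of the trim and fill method)."""
    dev = [e - pooled for e in effects]
    # Rank the absolute deviations from the pooled estimate (smallest = 1)
    order = sorted(range(len(dev)), key=lambda i: abs(dev[i]))
    ranks = {i: r + 1 for r, i in enumerate(order)}
    n = len(dev)
    # Wilcoxon-type statistic: sum of ranks for studies above the pooled estimate
    t = sum(ranks[i] for i in range(n) if dev[i] > 0)
    k0 = (4 * t - n * (n + 1)) / (2 * n - 1)
    return max(0, round(k0))

symmetric = l0_missing([-2.0, -1.0, 0.0, 1.0, 2.0], 0.0)      # balanced funnel
skewed = l0_missing([0.5, 1.0, 1.5, 2.0, 3.0, 4.0], 1.0)      # right-heavy funnel
```

A balanced funnel yields an estimate of zero missing studies, while a right-heavy one yields a positive count to be filled in on the left.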
In the second article in the PROGRESS series on prognostic factor research, Sara Schroter and colleagues discuss the role of prognostic factors in current clinical practice, randomised trials, and developing new interventions, and explain why and how prognostic factor research should be improved.
Standard methods for indirect comparisons and network meta-analysis are based on aggregate data, with the key assumption that there is no difference between the trials in the distribution of effect-modifying variables. Methods which relax this assumption are becoming increasingly common for submissions to reimbursement agencies, such as the National Institute for Health and Care Excellence (NICE). These methods use individual patient data from a subset of trials to form population-adjusted indirect comparisons between treatments, in a specific target population. Recently proposed population adjustment methods include the Matching-Adjusted Indirect Comparison (MAIC) and the Simulated Treatment Comparison (STC). Despite increasing popularity, MAIC and STC remain largely untested. Furthermore, there is a lack of clarity about exactly how and when they should be applied in practice, and even whether the results are relevant to the decision problem. There is therefore a real and present risk that the assumptions being made in one submission to a reimbursement agency are fundamentally different to—or even incompatible with—the assumptions being made in another for the same indication. We describe the assumptions required for population-adjusted indirect comparisons, and demonstrate how these may be used to generate comparisons in any given target population. We distinguish between anchored and unanchored comparisons according to whether a common comparator arm is used or not. Unanchored comparisons make much stronger assumptions, which are widely regarded as infeasible. We provide recommendations on how and when population adjustment methods should be used, and the supporting analyses that are required to provide statistically valid, clinically meaningful, transparent and consistent results for the purposes of health technology appraisal. 
Simulation studies are needed to examine the properties of population adjustment methods and their robustness to breakdown of assumptions.
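The matching step of MAIC can be sketched for a single covariate: the individual patient data are reweighted by the method of moments so that the weighted covariate mean equals the mean reported by the aggregate-data trial. The covariate (age), values, and target mean below are hypothetical, and a real analysis would balance several covariates at once.

```python
import math

def maic_weights(x, target_mean, tol=1e-10):
    """Matching-adjusted weights for one covariate: w_i = exp(a * (x_i - m)),
    with a chosen so the weighted mean of x equals the target mean m.
    Solved by Newton's method on the moment condition."""
    xc = [xi - target_mean for xi in x]          # centre on the target mean
    a = 0.0
    for _ in range(100):
        w = [math.exp(a * xi) for xi in xc]
        g = sum(wi * xi for wi, xi in zip(w, xc))        # moment condition
        h = sum(wi * xi * xi for wi, xi in zip(w, xc))   # its derivative (> 0)
        step = g / h
        a -= step
        if abs(step) < tol:
            break
    return [math.exp(a * xi) for xi in xc]

# Hypothetical IPD ages reweighted to match an aggregate trial mean age of 60
ages = [50.0, 55.0, 58.0, 62.0, 70.0]
w = maic_weights(ages, 60.0)
weighted_mean = sum(wi * xi for wi, xi in zip(w, ages)) / sum(w)
```

After weighting, the outcome comparison is made on the reweighted sample; the target mean must lie within the observed covariate range for a solution to exist.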