NMSC treatments increased by 86% between 1997 and 2010. We anticipate that the number and the total cost (not adjusted for inflation) of NMSC treatments will increase by a further 22% between 2010 and 2015. NMSC will remain the most costly cancer and will place an increasing burden on the Australian health care system.
Background
Retaining participants in cohort studies with multiple follow-up waves is difficult. Commonly, researchers are faced with the problem of missing data, which may introduce bias as well as a loss of statistical power and precision. The STROBE guidelines (von Elm et al., Lancet, 370:1453-1457, 2007; Vandenbroucke et al., PLoS Med, 4:e297, 2007) and the guidelines proposed by Sterne et al. (BMJ, 338:b2393, 2009) recommend that cohort studies report the amount of missing data, the reasons for non-participation and non-response, and the method used to handle missing data in the analyses. We conducted a review of publications from cohort studies to document the reporting of missing data for exposure measures and to describe the statistical methods used to account for the missing data.
Methods
A systematic search of English-language papers published from January 2000 to December 2009 was carried out in PubMed. Prospective cohort studies with a sample size greater than 1,000 that analysed data using repeated measures of exposure were included.
Results
Among the 82 papers meeting the inclusion criteria, only 35 (43%) reported the amount of missing data according to the suggested guidelines. Sixty-eight papers (83%) described how they dealt with missing data in the analysis. Most of the papers excluded participants with missing data and performed a complete-case analysis (n = 54, 66%). Other papers used more sophisticated methods, including multiple imputation (n = 5) or fully Bayesian modeling (n = 1). Methods known to produce biased results were also used, for example, last observation carried forward (n = 7), the missing-indicator method (n = 1), and mean value substitution (n = 3). For the remaining 14 papers, the method used to handle missing data in the analysis was not stated.
Conclusions
This review highlights the inconsistent reporting of missing data in cohort studies and the continuing use of inappropriate methods to handle missing data in the analysis. Epidemiological journals should invoke the STROBE guidelines as a framework for authors so that the amount of missing data, and how it was accounted for in the analysis, is transparent in the reporting of cohort studies.
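The gap between complete-case analysis and imputation-based methods can be illustrated with a small simulation. In the sketch below (all data and numbers are hypothetical, and single regression imputation stands in as a simplified proxy for full multiple imputation), an exposure is made missing at random given the outcome; under this mechanism the complete-case estimate of the mean exposure is biased, while an imputation that conditions on the fully observed outcome recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical cohort: exposure x, outcome y correlated with x.
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

# Exposure missing at random (MAR) given the outcome: participants
# with a high outcome are more likely to have a missing exposure.
missing = rng.random(n) < 1.0 / (1.0 + np.exp(-y))
x_obs = np.where(missing, np.nan, x)

# Complete-case analysis: simply drop records with a missing exposure.
cc = ~np.isnan(x_obs)
mean_cc = x_obs[cc].mean()

# Single regression imputation (a simplified stand-in for multiple
# imputation): predict the missing exposure from the fully observed
# outcome, fitting the prediction model on the complete cases.
b, a = np.polyfit(y[cc], x_obs[cc], 1)
x_imp = np.where(np.isnan(x_obs), a + b * y, x_obs)
mean_imp = x_imp.mean()

# The true mean exposure is 0: the complete-case mean is biased low,
# while the imputation-based mean is close to the truth.
print(f"complete-case mean: {mean_cc:+.2f}")
print(f"imputed mean:       {mean_imp:+.2f}")
```

Proper multiple imputation would additionally repeat the imputation with random draws and combine the results with Rubin's rules to propagate imputation uncertainty; the single-imputation sketch only illustrates the bias of the point estimate.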
Results from cohort studies of adult weight gain and risk of colorectal cancer are inconsistent. We conducted a systematic review and meta-analysis of prospective studies assessing the association of change in weight/body mass index with colorectal cancer risk. We searched Scopus and Web of Science up to June 2014 and supplemented the search with manual searches of the reference lists of the identified articles. Thirteen studies published between 1997 and 2014 were pooled by using a random-effects model, and potential heterogeneity was explored by fitting meta-regression models. The highest weight gain category, measured by weight/body mass index, compared with a reference category, was associated with increased risk of colorectal cancer (hazard ratio (HR) = 1.16, 95% confidence interval (CI): 1.08, 1.24), whereas no association was found for weight loss (HR = 0.96, 95% CI: 0.89, 1.05). There was no suggestion of heterogeneity across studies. For dose response, a 5-kg weight gain was associated with a slightly increased risk of colorectal cancer (HR = 1.03, 95% CI: 1.02, 1.05), with some heterogeneity observed (I² = 42%; P = 0.02), which was partially explained by sex (ratio of HRs = 1.03, 95% CI: 1.00, 1.07). In this meta-analysis, gain in weight/body mass index was positively associated with colorectal cancer risk.
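Random-effects pooling of study-level hazard ratios can be sketched in a few lines. The example below assumes the standard DerSimonian-Laird estimator of between-study variance and uses illustrative hazard ratios and confidence intervals, not the thirteen studies pooled in the paper:

```python
import numpy as np

# Hypothetical study-level hazard ratios and 95% CIs
# (illustrative numbers only).
hr = np.array([1.02, 1.05, 1.01, 1.08, 1.03])
lo = np.array([0.99, 1.01, 0.97, 1.02, 1.00])
hi = np.array([1.05, 1.09, 1.05, 1.14, 1.06])

# Work on the log scale; recover each study's SE from its CI width.
y = np.log(hr)
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)
w = 1 / se**2                              # fixed-effect weights

# DerSimonian-Laird estimate of the between-study variance tau^2.
y_fe = (w * y).sum() / w.sum()
q = (w * (y - y_fe) ** 2).sum()            # Cochran's Q
df = len(y) - 1
c = w.sum() - (w**2).sum() / w.sum()
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled HR and 95% CI.
w_re = 1 / (se**2 + tau2)
y_re = (w_re * y).sum() / w_re.sum()
se_re = np.sqrt(1 / w_re.sum())
pooled = np.exp([y_re, y_re - 1.96 * se_re, y_re + 1.96 * se_re])

# I^2: share of total variability due to between-study heterogeneity.
i2 = max(0.0, (q - df) / q) * 100

print(f"pooled HR {pooled[0]:.2f} "
      f"(95% CI {pooled[1]:.2f}, {pooled[2]:.2f}), I^2 = {i2:.0f}%")
```

The dose-response pooling in the paper additionally rescales each study's log-HR to a common 5-kg increment before combining; the pooling step itself is the same.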
Differences between arm‐based (AB) and contrast‐based (CB) models for network meta‐analysis (NMA) are controversial. We compare the CB model of Lu and Ades (2006), the AB model of Hong et al (2016), and two intermediate models, using hypothetical data and a selected real data set. Differences between models arise primarily from study intercepts being fixed effects in the Lu‐Ades model but random effects in the Hong model, and we identify four key differences. (1) If study intercepts are fixed effects then only within‐study information is used, but if they are random effects then between‐study information is also used and can cause important bias. (2) Models with random study intercepts are suitable for deriving a wider range of estimands, e.g., the marginal risk difference, when underlying risk is derived from the NMA data; but underlying risk is usually best derived from external data, and then models with fixed intercepts are equally good. (3) The Hong model allows treatment effects to be related to study intercepts, but the Lu‐Ades model does not. (4) The Hong model is valid under a more relaxed missing data assumption, that arms (rather than contrasts) are missing at random, but this does not appear to reduce bias. We also describe an AB model with fixed study intercepts and a CB model with random study intercepts. We conclude that both AB and CB models are suitable for the analysis of NMA data, but using random study intercepts requires a strong rationale such as relating treatment effects to study intercepts.
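The first difference, that between-study information enters when study intercepts are not fixed, can be illustrated outside the NMA setting with a toy two-arm example. In the sketch below (entirely synthetic data; complete pooling stands in for the large-variance limit of random intercepts, not for either published model), study intercepts are made to correlate with treatment allocation, so the pooled estimate absorbs between-study confounding that the fixed-intercept fit screens out:

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 1.0
n_studies, n_per_study = 40, 50

rows = []
for j in range(n_studies):
    base = rng.normal(scale=2.0)           # study intercept
    # Studies with a high baseline allocate more patients to treatment,
    # so between-study comparisons are confounded by the intercepts.
    p_treat = 1.0 / (1.0 + np.exp(-base))
    for _ in range(n_per_study):
        t = int(rng.random() < p_treat)
        rows.append((j, t, base + true_effect * t + rng.normal()))
study, treat, y = map(np.array, zip(*rows))

# Fixed study intercepts: a dummy per study, so only within-study
# contrasts inform the treatment effect (analogue of the CB setup).
X = np.column_stack(
    [treat] + [(study == j).astype(float) for j in range(n_studies)]
)
beta_fixed = np.linalg.lstsq(X, y, rcond=None)[0][0]

# Ignoring study membership (limiting case of random intercepts with
# large variance): between-study information leaks in and biases the
# estimate because intercepts and allocation are correlated.
X0 = np.column_stack([np.ones_like(y), treat.astype(float)])
beta_pooled = np.linalg.lstsq(X0, y, rcond=None)[0][1]

print(f"within-study (fixed intercepts): {beta_fixed:.2f}")
print(f"pooled (uses between-study info): {beta_pooled:.2f}")
```

An actual random-intercept fit would land between these two extremes, shrinking toward the pooled estimate as the assumed intercept variance grows, which is why the paper argues that random study intercepts need a strong rationale.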
Background
The Interrupted Time Series (ITS) is a quasi-experimental design commonly used in public health to evaluate the impact of interventions or exposures. Multiple statistical methods are available to analyse data from ITS studies, but no empirical investigation has examined how the different methods compare when applied to real-world datasets.
Methods
A random sample of 200 ITS studies identified in a previous methods review was included. Time series data from each of these studies were sought. Each dataset was re-analysed using six statistical methods. Point and confidence interval estimates for level and slope changes, standard errors, p-values and estimates of autocorrelation were compared between methods.
Results
From the 200 ITS studies, comprising 230 time series, 190 datasets were obtained. We found that the choice of statistical method can substantially affect the level and slope change point estimates, their standard errors, the width of confidence intervals and p-values. Statistical significance (categorised at the 5% level) often differed across the pairwise comparisons of methods, with disagreement ranging from 4% to 25%. Estimates of autocorrelation differed depending on the method used and the length of the series.
Conclusions
The choice of statistical method in ITS studies can lead to substantially different conclusions about the impact of the interruption. Pre-specification of the statistical method is encouraged, and naive conclusions based on statistical significance should be avoided.
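One of the simpler methods typically compared in such studies is segmented OLS regression, which estimates the level and slope changes directly as regression coefficients. A minimal sketch on simulated data (all series values and parameters illustrative, not drawn from the included studies):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monthly series: 24 pre- and 24 post-interruption points.
t = np.arange(48)
interruption = 24
post = (t >= interruption).astype(float)
t_since = post * (t - interruption)

# Simulated truth: baseline slope 0.5, level change -5, slope change -0.3.
y = 10 + 0.5 * t - 5.0 * post - 0.3 * t_since + rng.normal(scale=1.0, size=48)

# Segmented OLS regression: y = b0 + b1*t + b2*post + b3*t_since + e,
# where b2 is the level change and b3 the slope change.
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_since])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = beta[2], beta[3]

# Lag-1 autocorrelation of the residuals, used to judge whether an
# autocorrelation-adjusted method (e.g. Prais-Winsten or REML-based
# estimation) should be preferred over plain OLS.
resid = y - X @ beta
rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]

print(f"level change {level_change:.2f}, "
      f"slope change {slope_change:.2f}, lag-1 rho {rho:.2f}")
```

The six methods compared in the paper differ mainly in how they handle the residual autocorrelation that plain OLS ignores, which is why the estimated standard errors and p-values, and hence conclusions, can diverge across methods.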