Clinical prediction models (CPMs) can inform decision making about treatment initiation, which requires predicted risks assuming no treatment is given. However, this is challenging, since CPMs are usually derived from data sets in which patients received treatment, often initiated postbaseline as "treatment drop-ins." This study proposes the use of marginal structural models (MSMs) to adjust for treatment drop-in. We illustrate the use of MSMs in the CPM framework through simulation studies representing randomized controlled trials and real-world observational data, and through the example of statin initiation for cardiovascular disease prevention. The simulations include a binary treatment and a covariate, each recorded at two timepoints and having a prognostic effect on a binary outcome. The bias in predicted risk was examined for a model ignoring treatment, a model fitted on treatment-naïve patients (at baseline), a model including baseline treatment, and the MSM. In all simulation scenarios, all models except the MSM underestimated the risk of the outcome in the absence of treatment. These results were supported by the statin initiation example, which showed that ignoring statin initiation postbaseline resulted in models that substantially underestimated the risk of a cardiovascular disease event occurring within 10 years. Consequently, CPMs that do not acknowledge treatment drop-in can lead to underallocation of treatment. In conclusion, when developing CPMs to predict treatment-naïve risk, researchers should consider using MSMs to adjust for treatment drop-in, and should also seek to exploit the ability of MSMs to estimate individual treatment effects.
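Marginal structural models of the kind described above are typically fitted with inverse-probability-of-treatment weighting (IPTW). The sketch below illustrates the idea in plain Python; all propensities and outcomes are hypothetical illustrative values, not study data.

```python
# Hedged sketch of the weighting behind a marginal structural model (MSM):
# inverse-probability-of-treatment weighting (IPTW) with stabilized weights.

def stabilized_weight(treated, p_treat, p_marginal):
    """Stabilized IPTW weight for one patient at one timepoint.

    treated    -- 1 if the patient initiated treatment, else 0
    p_treat    -- P(treatment | covariate history), from a fitted model
    p_marginal -- P(treatment) ignoring covariates (numerator model)
    """
    if treated:
        return p_marginal / p_treat
    return (1.0 - p_marginal) / (1.0 - p_treat)

def weighted_risk(outcomes, weights):
    """Weighted outcome proportion: the MSM-style risk estimate."""
    return sum(o * w for o, w in zip(outcomes, weights)) / sum(weights)

# Patients who were likely to start treatment ("drop in") but did not are
# up-weighted, so the untreated patients stand in for the whole cohort and
# the weighted proportion approximates the treatment-naive risk.
treated  = [0, 0, 1, 1]
p_treat  = [0.2, 0.8, 0.2, 0.8]   # hypothetical propensity scores
p_marg   = 0.5
outcomes = [0, 1, 0, 1]

weights = [stabilized_weight(t, p, p_marg) for t, p in zip(treated, p_treat)]
untreated = [i for i, t in enumerate(treated) if t == 0]
naive_risk = weighted_risk([outcomes[i] for i in untreated],
                           [weights[i] for i in untreated])
```

In a real analysis the propensities would come from a fitted treatment model at each timepoint, and the weighted outcome model would replace this simple proportion.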
The objective of this study was to assess the reliability of individual risk predictions based on routinely collected data, considering the heterogeneity between clinical sites in data and populations. Cardiovascular disease (CVD) risk prediction with QRISK3 was used as the exemplar. The study included 3.6 million patients in 392 sites from the Clinical Practice Research Datalink. Cox models with QRISK3 predictors and a frailty (random-effect) term for each site were used to incorporate unmeasured site variability. There was considerable variation in data recording between general practices (missingness of body mass index ranged from 18.7% to 60.1%). Incidence rates also varied considerably between practices (from 0.4 to 1.3 CVD events per 100 patient-years). Individual CVD risk predictions from the random-effects model were inconsistent with the QRISK3 predictions. For patients with a QRISK3 predicted risk of 10%, the 95% range of predicted risks under the random-effects model was between 7.2% and 13.7%. Random variability explained only a small part of this. The random-effects model was equivalent to QRISK3 in discrimination and calibration. Risk prediction models based on routinely collected health data perform well for populations but with great uncertainty for individuals. Clinicians and patients need to understand this uncertainty.
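The mechanism behind the widened range can be sketched directly from the survival relation: a site-level frailty multiplies the cumulative hazard, so an individual's risk shifts with the site effect. In the sketch below the between-practice standard deviation (0.17 on the log-hazard scale) is a hypothetical value chosen only to roughly reproduce the 7.2%–13.7% range reported above; it is not a quantity taken from the paper.

```python
import math

def risk_with_site_effect(base_risk, log_frailty):
    """10-year risk when the site frailty multiplies the cumulative
    hazard by exp(log_frailty), via S_adj = (1 - base_risk) ** exp(log_frailty)."""
    return 1.0 - (1.0 - base_risk) ** math.exp(log_frailty)

# Hypothetical between-practice SD of the log-frailty (an assumption),
# chosen so the resulting 95% range is broadly in line with 7.2-13.7%.
sigma = 0.17
lo = risk_with_site_effect(0.10, -1.96 * sigma)
hi = risk_with_site_effect(0.10,  1.96 * sigma)
```

The point of the sketch is that a modest, plausible spread in site-level baseline hazard translates a single 10% headline risk into a wide interval of individual predictions.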
Background The cohort multiple randomised controlled trial (cmRCT) is a newly proposed pragmatic trial design; several cmRCTs have recently been initiated. This study addresses the unresolved question of whether differential refusal in the intervention arm leads to bias or loss of statistical power, and how to deal with it. Methods We conducted simulations evaluating a hypothetical cluster cmRCT in patients at risk of cardiovascular disease (CVD). To deal with refusal, we compared the analysis methods intention to treat (ITT) and per protocol (PP) and two instrumental variable (IV) methods, two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI), with respect to their bias and power. We varied the correlation between the probability of treatment refusal and the probability of experiencing the outcome to create different scenarios. Results We found ITT to be biased in all scenarios, PP the most biased when the correlation was strong, and 2SRI the least biased on average. Trials suffer a drop in power unless the refusal rate is factored into the power calculation. Conclusions The ITT effect in routine practice is likely to lie somewhere between the ITT and IV estimates from the trial, which differ substantially depending on refusal rates. More research is needed on how refusal rates of experimental interventions correlate with refusal rates in routine practice, to help answer the question of which analysis is more relevant. We also recommend updating the required sample size during the trial as more information about the refusal rate is gained.
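The two-stage residual inclusion estimator named above can be illustrated with a linear-model simplification (the study itself uses a time-to-event setting, so this is a hedged toy, not the paper's method). Stage 1 regresses received treatment on the randomised arm (the instrument); stage 2 regresses the outcome on treatment plus the stage-1 residual, which absorbs the confounding induced by refusal.

```python
# Hedged sketch of two-stage residual inclusion (2SRI) using linear models
# throughout. The toy data are constructed, not taken from the paper.

def ols_simple(x, y):
    """Least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def ols2(x1, x2, y):
    """OLS of y on [1, x1, x2] via the normal equations (3x3 solve)."""
    X = [[1.0, a, b] for a, b in zip(x1, x2)]
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    for i in range(3):              # forward elimination
        for j in range(i + 1, 3):
            f = XtX[j][i] / XtX[i][i]
            XtX[j] = [XtX[j][k] - f * XtX[i][k] for k in range(3)]
            Xty[j] -= f * Xty[i]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):             # back-substitution
        beta[i] = (Xty[i] - sum(XtX[i][k] * beta[k]
                                for k in range(i + 1, 3))) / XtX[i][i]
    return beta

def two_stage_residual_inclusion(z, t, y):
    """Stage 1: treatment on the instrument (randomised arm).
    Stage 2: outcome on treatment plus the stage-1 residual."""
    a, b = ols_simple(z, t)
    resid = [ti - (a + b * zi) for zi, ti in zip(z, t)]
    return ols2(t, resid, y)[1]     # coefficient on treatment

# Toy trial: z is the randomised arm, t the treatment actually received
# (one refuser, one crossover), and y = 3 + 2*t + u, where the unmeasured
# confounder u equals the stage-1 residual -- so 2SRI recovers exactly 2,
# while the naive as-treated regression of y on t is biased.
z = [0, 0, 0, 0, 1, 1, 1, 1]
t = [0, 0, 0, 1, 1, 1, 1, 0]
y = [2.75, 2.75, 2.75, 5.75, 5.25, 5.25, 5.25, 2.25]

naive = ols_simple(t, y)[1]                 # confounded as-treated estimate
est = two_stage_residual_inclusion(z, t, y)
```

The toy is rigged so the confounder is exactly the stage-1 residual, which makes the recovery exact; in practice 2SRI reduces, rather than eliminates, bias.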
Background Risk prediction models are commonly used in practice to inform decisions on patients' treatment. Uncertainty around risk scores beyond the confidence interval is rarely explored. We conducted an uncertainty analysis of the QRISK prediction tool to evaluate the robustness of individual risk predictions under varying modelling decisions. Methods We derived a cohort of patients eligible for cardiovascular risk prediction from the Clinical Practice Research Datalink (CPRD) with linked hospitalisation and mortality records (N = 3,792,474). Risk prediction models were developed using the methods reported for QRISK2 and QRISK3, before adjusting for additional risk factors, a secular trend, geographical variation in risk, and the method for imputing missing data when generating a risk score (models A–F). Ten-year risk scores were compared across the different models alongside model performance metrics. Results We found substantial variation in risk at the individual level across the models. The 95th percentile range of risks in model F for patients with risks between 9 and 10% according to model A was 4.4–16.3% and 4.6–15.8% for females and males respectively. Despite this, the models were difficult to distinguish using common performance metrics (Harrell's C ranged from 0.86 to 0.87). The largest contributing factor to variation in risk was adjusting for a secular trend (HR per calendar year, 0.96 [0.95–0.96] and 0.96 [0.96–0.96]). When extrapolating to the UK population, we found that 3.8 million patients may be reclassified as eligible for statin prescription depending on the model used. A key limitation of this study was that we could not assess the variation in risk that may be caused by risk factors missing from the database (such as diet or physical activity). Conclusions Risk prediction models that use routinely collected data provide estimates strongly dependent on modelling decisions.
Despite this large variability in patient risk, the models appear to perform similarly according to standard performance metrics. Decision-making should be supplemented with clinical judgement and evidence of additional risk factors. The largest source of variability, a secular trend in CVD incidence, can be accounted for and should be explored in more detail. Electronic supplementary material: the online version of this article (doi:10.1186/s12916-019-1368-8) contains supplementary material, which is available to authorized users.
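A secular trend of the kind reported above (hazard ratio 0.96 per calendar year) acts multiplicatively on the hazard, so a predicted risk derived from older data can be rescaled to the present day. The sketch below assumes, as a simplification, that the trend multiplies the whole cumulative hazard; the example numbers are illustrative.

```python
def trend_adjusted_risk(risk, years_ahead, hr_per_year=0.96):
    """Rescale a predicted risk when the baseline hazard declines by
    hr_per_year per calendar year, via S_adj = S ** (hr_per_year ** years).

    This is a simplifying assumption for illustration, not the exact
    adjustment used in the study.
    """
    return 1.0 - (1.0 - risk) ** (hr_per_year ** years_ahead)

# A 10% ten-year risk estimated from data centred five calendar years in
# the past shrinks noticeably once the declining incidence is applied.
r5 = trend_adjusted_risk(0.10, 5)
```

Even this small annual decline moves a patient near the 10% statin-eligibility threshold by more than a percentage point, which is why the secular trend dominated the between-model variation.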
Introduction Traditional phase IIIb randomised trials may not reflect routine clinical practice. The Salford Lung Study in chronic obstructive pulmonary disease (SLS COPD) allowed broad inclusion criteria and followed patients in routine practice. We assessed whether SLS COPD approximated the England COPD population and looked for evidence of a Hawthorne effect. Methods This observational cohort study compared patients with COPD in the usual care arm of SLS COPD (2012–2014) with matched non-trial patients with COPD in England from the Clinical Practice Research Datalink database. Generalisability was explored using baseline demographic, clinical and treatment variables; outcomes included COPD exacerbations in adjusted models and pretrial versus peritrial comparisons. Results Trial participants were younger (mean, 66.7 vs 71.1 years), more deprived (most deprived quintile, 51.5% vs 21.4%), more often current smokers (47.5% vs 32.1%), and had more severe Global initiative for chronic Obstructive Lung Disease stages but less comorbidity than non-trial patients. There were no material differences in other characteristics. Acute COPD exacerbation rates were high in the trial population (98.37th percentile). Conclusion The trial population was similar to the non-trial COPD population. We observed some evidence of a Hawthorne effect, with more exacerbations recorded in trial patients; however, the largest effect was observed through behavioural changes in patients and general practitioner coding practices.
Background The cohort multiple randomised controlled trial (cmRCT) design provides an opportunity to incorporate the benefits of randomisation within clinical practice, thus reducing costs, integrating electronic healthcare records, and improving external validity. This study aims to address a key concern of the cmRCT design: refusal of treatment occurs only in the intervention arm, which may lead to bias and reduce statistical power. Methods We used simulation studies to assess the effect of this refusal, both random and related to event risk, on the bias of the effect estimator and on statistical power. A series of simulations was undertaken to represent a cmRCT with a time-to-event endpoint. Intention-to-treat (ITT), per protocol (PP), and instrumental variable (IV) analysis methods, two-stage predictor substitution and two-stage residual inclusion, were compared across various refusal scenarios. Results We found that the IV methods provide a less biased estimator of the causal effect when refusal is present in the intervention arm, with the two-stage residual inclusion method performing best with regard to minimum bias and sufficient power. We demonstrate that sample sizes should be adapted based on expected and actual refusal rates in order to be sufficiently powered for IV analysis. Conclusion We recommend running both IV and ITT analyses in an individually randomised cmRCT, as the effect size of interest, or the effect we would observe in clinical practice, is expected to lie somewhere between the ITT and IV estimates. The optimal instrumental variable method, in terms of bias and power, was two-stage residual inclusion. We recommend using adaptive power calculations, updating them as refusal rates are observed during the trial recruitment phase, in order to be sufficiently powered for IV analysis.
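The recommendation to adapt sample sizes to observed refusal rates can be sketched with a standard planning approximation (an assumption here, not the paper's formula): under random refusal the effect estimate is diluted by the compliance rate c, so the required sample size inflates by roughly 1/c².

```python
import math

def adjusted_sample_size(n_full_compliance, refusal_rate):
    """Inflate a per-arm sample size when a fraction of the intervention
    arm refuses treatment. Assumes random (non-differential) refusal and
    one-sided crossover: the estimate is diluted by the compliance rate
    c = 1 - refusal_rate, so the required N scales as 1 / c**2.

    This is a common planning heuristic, offered as an assumption, not
    the adaptive calculation used in the study."""
    c = 1.0 - refusal_rate
    return math.ceil(n_full_compliance / c ** 2)
```

An adaptive design would recompute this as the observed refusal rate accumulates during recruitment, increasing the target N if refusal exceeds the planning assumption.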
Background The stability of risk estimates from prediction models may be highly dependent on the sample size of the dataset available for model derivation. In this paper, we evaluate the stability of cardiovascular disease risk scores for individual patients when different sample sizes are used for model derivation; these include sample sizes similar to those used to derive models recommended in the national guidelines, and sizes based on a recently published sample size formula for prediction models. Methods We mimicked the process of sampling N patients from a population to develop a risk prediction model by sampling patients from the Clinical Practice Research Datalink. A cardiovascular disease risk prediction model was developed on each sample and used to generate risk scores for an independent cohort of patients. This process was repeated 1000 times, giving a distribution of risks for each patient. N = 100,000, 50,000, 10,000, Nmin (derived from the sample size formula) and Nepv10 (meeting the 10-events-per-predictor rule) were considered. The 5th–95th percentile range of risks across these models was used to evaluate instability. Patients were grouped by the risk derived from a model developed on the entire population (population-derived risk) to summarise the results. Results For a sample size of 100,000, the median 5th–95th percentile range of risks for patients across the 1000 models was 0.77%, 1.60%, 2.42% and 3.22% for patients with population-derived risks of 4–5%, 9–10%, 14–15% and 19–20% respectively; for N = 10,000, it was 2.49%, 5.23%, 7.92% and 10.59%, and for the formula-derived sample size, it was 6.79%, 14.41%, 21.89% and 29.21%. Restricting this analysis to models with high discrimination, good calibration or small mean absolute prediction error reduced the percentile range, but high levels of instability remained. Conclusions Widely used cardiovascular disease risk prediction models suffer from high levels of instability induced by sampling variation.
Many models will also suffer from overfitting (a closely linked concept), but at acceptable levels of overfitting, there may still be high levels of instability in individual risk. Stability of risk estimates should be a criterion when determining the minimum sample size to develop models.
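The instability metric used above — the 5th–95th percentile range of one patient's risk across the 1000 refitted models — is straightforward to compute. A plain-Python sketch follows; the linear-interpolation percentile convention is an assumption (different conventions shift the range slightly).

```python
def percentile(values, p):
    """Percentile (p in 0-100) with linear interpolation between ranks."""
    xs = sorted(values)
    if len(xs) == 1:
        return xs[0]
    rank = (p / 100.0) * (len(xs) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(xs) - 1)
    frac = rank - lo
    return xs[lo] * (1.0 - frac) + xs[hi] * frac

def instability_range(risks):
    """5th-95th percentile range of one patient's predicted risk across
    repeatedly refitted models: the stability metric used in the study."""
    return percentile(risks, 95) - percentile(risks, 5)

# Illustrative: 101 evenly spaced risk estimates from 0.00 to 1.00 give a
# 5th-95th percentile range of 0.90.
spread = instability_range([i / 100 for i in range(101)])
```

In the study design this range is computed per patient over the 1000 sampled-and-refitted models, then summarised (as a median) within bands of population-derived risk.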