Purpose: To date there has been no extensive analysis of the outcomes of biomarker use in oncology.

Methods: Data were pooled across four oncology indications, drawing on trial outcomes from http://www.clinicaltrials.gov: breast cancer, non-small cell lung cancer (NSCLC), melanoma and colorectal cancer, 1998 to 2017. We compared the likelihood that drugs would progress through the stages of clinical trial testing to approval based on biomarker status. This was done with multi-state Markov models, tools that describe a stochastic process in which subjects move among a finite number of states.

Results: Over 10,000 trials were screened, yielding 745 drugs. Including biomarker status as a covariate significantly improved the fit of the Markov model describing drug trajectories through the stages of clinical trial testing. Hazard ratios derived from the Markov models showed a nearly fivefold increase in the likelihood of approval for drugs with biomarkers across all indications combined, and 12-, 8- and 7-fold increases for breast cancer, melanoma and NSCLC, respectively. Markov models with exploratory biomarkers outperformed Markov models with no biomarkers.

Conclusion: This is the first systematic statistical evidence that biomarkers clearly increase clinical trial success rates in three different oncology indications. Exploratory biomarkers, long before they are properly validated, also appear to improve success rates in oncology. This supports early and aggressive adoption of biomarkers in oncology clinical trials.
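The multi-state model described above can be illustrated with a minimal sketch. This is not the authors' model: it uses a simplified discrete-time Markov chain rather than the continuous-time models fit in the study, and all state names, transition probabilities, and the biomarker effect are invented for illustration.

```python
import numpy as np

# States: 0 = Phase 1, 1 = Phase 2, 2 = Phase 3, 3 = Approved, 4 = Discontinued.
# One-step (e.g., annual) transition probabilities; each row sums to 1.
# All values are invented for illustration.
P_no_biomarker = np.array([
    [0.55, 0.30, 0.00, 0.00, 0.15],
    [0.00, 0.60, 0.25, 0.00, 0.15],
    [0.00, 0.00, 0.60, 0.15, 0.25],
    [0.00, 0.00, 0.00, 1.00, 0.00],  # Approved: absorbing state
    [0.00, 0.00, 0.00, 0.00, 1.00],  # Discontinued: absorbing state
])

# Hypothetical biomarker effect: higher forward-transition probabilities and
# lower discontinuation (the continuous-time analogue would scale transition
# intensities by a hazard ratio, as in the covariate models described above).
P_biomarker = np.array([
    [0.45, 0.45, 0.00, 0.00, 0.10],
    [0.00, 0.50, 0.40, 0.00, 0.10],
    [0.00, 0.00, 0.50, 0.35, 0.15],
    [0.00, 0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

def prob_approved(P, steps):
    """Probability that a drug starting in Phase 1 is approved within `steps` steps."""
    return np.linalg.matrix_power(P, steps)[0, 3]

p_no = prob_approved(P_no_biomarker, 10)
p_bio = prob_approved(P_biomarker, 10)
relative_benefit = p_bio / p_no  # crude discrete-time analogue of a hazard ratio
```

Under these made-up rates, the biomarker chain yields a substantially higher ten-step approval probability; in the study itself, the effect is estimated as hazard ratios on the transition intensities of a continuous-time model.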
Background: Deploying safe and effective machine learning (ML) models is essential to realizing the promise of artificial intelligence for improved healthcare. Yet there remains a large gap between the number of high-performing ML models trained on healthcare data and the actual deployment of these models. Here, we describe the deployment of CHARTwatch, an artificial-intelligence-based early warning system designed to predict patients' risk of clinical deterioration.

Methods: We describe the end-to-end infrastructure developed to deploy CHARTwatch and outline the process from data extraction to communicating patient risk scores to physicians and nurses in real time. We then describe the challenges faced in deployment, including technical issues (e.g., unstable database connections), process-related challenges (e.g., changes in how a critical lab value is measured), and the challenges of deploying a clinical system in the middle of a pandemic. We report several measures of the success of the deployment: model performance, adherence to workflows, and infrastructure uptime/downtime. Ultimately, success is driven by end-user adoption and impact on relevant clinical outcomes. We assess our deployment process by evaluating how closely we followed existing guidance for good machine learning practice (GMLP) and identify gaps that this guidance does not address.

Results: The model demonstrated strong and consistent performance in real time over the first 19 months after deployment (AUC 0.76), comparable to its performance on the held-out silent-deployment test data (AUC 0.79). The infrastructure remained online for more than 99% of the first year of deployment. Our deployment adhered to all 10 GMLP guiding principles. Several steps were crucial for deployment but are missing, or lack detail, in the GMLP principles, including the need for a silent testing period, the creation of robust downtime protocols, and the importance of end-user engagement. Evaluation of impacts on clinical outcomes and adherence to clinical protocols is underway.

Conclusion: We deployed an artificial-intelligence-based early warning system to predict clinical deterioration in hospital. Careful attention to data infrastructure, identifying problems during a silent testing period, close monitoring during deployment, and strong engagement with end-users were critical for successful deployment.
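The comparison of live performance against the silent-deployment period can be sketched with a minimal, self-contained example. This is not CHARTwatch code: the AUC function is the standard rank-based estimator, and all labels and risk scores below are synthetic.

```python
def auc(labels, scores):
    """Rank-based ROC AUC: the probability that a randomly chosen positive
    case scores higher than a randomly chosen negative case (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic deterioration labels and model risk scores for two periods.
silent_labels = [0, 0, 1, 0, 1, 1, 0, 1]
silent_scores = [0.1, 0.3, 0.8, 0.2, 0.7, 0.9, 0.4, 0.6]
live_labels   = [0, 1, 0, 1, 1, 0, 0, 1]
live_scores   = [0.2, 0.7, 0.3, 0.6, 0.8, 0.65, 0.1, 0.9]

silent_auc = auc(silent_labels, silent_scores)
live_auc = auc(live_labels, live_scores)
drift = abs(silent_auc - live_auc)  # large drift would trigger investigation
```

Tracking the live AUC against the silent-period baseline is one concrete way to operationalize the "close monitoring during deployment" the abstract describes; what drift threshold should trigger retraining or review is a deployment-specific choice.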