Clinical trials consume the latter half of the 10- to 15-year, 1.5 to 2.0 billion USD development cycle for bringing a single new drug to market. A failed trial therefore sinks not only the investment in the trial itself but also the preclinical development costs, putting the loss per failed clinical trial at 800 million to 1.4 billion USD. Suboptimal patient cohort selection and recruitment techniques, paired with the inability to monitor patients effectively during trials, are two of the main causes of high trial failure rates: only one in 10 compounds entering a clinical trial reaches the market. We explain how recent advances in artificial intelligence (AI) can be used to reshape key steps of clinical trial design toward increasing trial success rates.
The future of clinical development is on the verge of a major transformation, owing to the convergence of large new digital data sources, the computing power to identify clinically meaningful patterns in those data using efficient artificial intelligence (AI) and machine-learning (ML) algorithms, and regulators embracing this change through new collaborations. This perspective summarizes insights, recent developments, and recommendations for infusing actionable computational evidence into clinical development and health care, drawing on academia, the biotechnology industry, nonprofit foundations, regulators, and technology corporations. Analysis of, and learning from, publicly available biomedical and clinical trial data sets, real-world evidence from sensors, and health records by machine-learning architectures are discussed. Strategies for modernizing the clinical development process by integrating AI- and ML-based digital methods and secure computing technologies through recently announced regulatory pathways at the United States Food and Drug Administration are outlined. We conclude by discussing applications and the impact of digital algorithmic evidence in improving medical care for patients.
Recent years have witnessed a surge of interest in data analytics with patient electronic health records (EHRs). Data-driven healthcare, which aims to use big medical data, representing the collective experience of treating hundreds of millions of patients, to provide the best and most personalized care, is believed to be one of the most promising directions for transforming healthcare. EHRs are one of the major carriers for making this data-driven healthcare revolution successful. Working directly with EHRs poses many challenges, such as temporality, sparsity, noisiness, and bias. Effective feature extraction, or phenotyping, from patient EHRs is therefore a key step before any further applications. In this paper, we propose a deep learning approach for phenotyping from patient EHRs. We first represent the EHRs of every patient as a temporal matrix, with time on one dimension and events on the other. We then build a four-layer convolutional neural network model for extracting phenotypes and performing prediction. The first layer is composed of the EHR matrices. The second layer is a one-side convolution layer that extracts phenotypes from the first layer. The third layer is a max pooling layer that introduces sparsity on the detected phenotypes, so that only the significant phenotypes remain. The fourth layer is a fully connected softmax prediction layer. To incorporate the temporal smoothness of patient EHRs, we also investigate three different temporal fusion mechanisms in the model: early fusion, late fusion, and slow fusion. Finally, the proposed model is validated on a real-world EHR data warehouse in the specific scenario of predictive modeling of chronic diseases.
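The four-layer pipeline described above can be sketched in NumPy as a forward pass for a single patient. The layer sizes, filter count, and random inputs below are illustrative assumptions, not values from the paper; a real model would learn the filters and weights by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer 1: one patient's EHR as a temporal matrix (events x time steps).
n_events, n_steps = 6, 12
ehr = rng.random((n_events, n_steps))

# Layer 2: "one-side" convolution. Each filter spans ALL events but slides
# only along the time axis, producing one phenotype signal per filter.
n_filters, width = 4, 3
filters = rng.standard_normal((n_filters, n_events, width))
conv = np.stack([
    np.array([(ehr[:, t:t + width] * f).sum()
              for t in range(n_steps - width + 1)])
    for f in filters
])                                 # shape: (n_filters, n_steps - width + 1)

# Layer 3: max pooling over time keeps only the strongest response per
# phenotype, introducing sparsity in the detected phenotypes.
pooled = conv.max(axis=1)          # shape: (n_filters,)

# Layer 4: fully connected softmax layer for disease prediction.
n_classes = 2
W = rng.standard_normal((n_classes, n_filters))
logits = W @ pooled
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)
```

Restricting convolution to the time axis is what makes the filters interpretable as phenotypes: each filter is a fixed combination of events whose activation is tracked over time.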
Drug-drug interactions (DDIs) are an important topic for public health and thus attract attention from both academia and industry. Here we hypothesize that clinical side effects (SEs) provide a human phenotypic profile and can be translated into computational models for predicting adverse DDIs. We propose an integrative label propagation framework that predicts DDIs by integrating SEs extracted from package inserts of prescription drugs, SEs extracted from the FDA Adverse Event Reporting System, and chemical structures from PubChem. Experimental results based on hold-out validation demonstrate the effectiveness of the proposed algorithm. In addition, the algorithm ranks drug information sources by their contribution to the prediction, not only confirming that SEs are important features for DDI prediction but also paving the way for more reliable DDI prediction models that prioritize multiple data sources. By applying the proposed algorithm to 1,626 small-molecule drugs with one or more SE profiles, we obtained 145,068 predicted DDIs. These predictions can help clinicians avoid hazardous drug interactions in their prescriptions and aid pharmaceutical companies in designing large-scale clinical trials by flagging potentially hazardous drug combinations. All data sets and predicted DDIs are available at http://astro.temple.edu/~tua87106/ddi.html.
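The core mechanism of the framework above, propagating known interaction labels over a drug-similarity graph, can be sketched as follows. The toy similarity matrix, the initial labels, and the damping parameter alpha are illustrative assumptions, not the paper's data or exact formulation.

```python
import numpy as np

def label_propagation(S, y, alpha=0.8, n_iter=100):
    """Propagate known interaction labels y over similarity matrix S.

    S : (n, n) symmetric nonnegative drug-similarity matrix
    y : (n,) initial labels (1 = known DDI with a query drug, 0 = unknown)
    """
    # Symmetrically normalize S: W = D^{-1/2} S D^{-1/2}
    d = S.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.where(d > 0, d, 1.0))
    W = S * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Iterate: diffuse scores along edges, while retaining initial labels.
    f = y.astype(float).copy()
    for _ in range(n_iter):
        f = alpha * (W @ f) + (1 - alpha) * y
    return f

# Toy example: a chain of 4 drugs linked by side-effect similarity.
S = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([1, 0, 0, 0])  # drug 0 has a known DDI with the query drug

scores = label_propagation(S, y)
print(scores)  # drugs nearer to drug 0 receive higher interaction scores
```

In the paper's setting, one such similarity graph would be built per data source (package-insert SEs, FAERS SEs, chemical structure), with learned weights deciding how much each source contributes to the propagated scores.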