Background: Several polygenic risk scores (PRS) have been developed for cardiovascular risk prediction, but the incremental value of including PRS alongside conventional risk factors is questionable. This study assesses the clinical utility of including four PRS generated from 194, 46K, 1.5M, and 6M SNPs, along with conventional risk factors, to predict risk of ischemic heart disease (IHD), myocardial infarction (MI), and first MI event on or before age 50 (early MI).
Methods: A cross-validated logistic regression (LR) algorithm was trained either on ~440K European-ancestry individuals from the UK Biobank (UKB) or on the full UKB population, including as features different combinations of conventional established-at-birth risk factors (ancestry, sex) and risk factors that vary over an individual's lifespan (age, BMI, hypertension, hyperlipidemia, diabetes, smoking, family history), with and without also including PRS. The algorithm was trained separately with IHD, MI, and early MI as prediction labels.
Results: When LR was trained using established-at-birth risk factors, adding the four PRS significantly improved the area under the curve (AUC) for IHD (0.62 to 0.67), MI (0.67 to 0.73), and early MI (0.70 to 0.79). When LR was trained using all risk factors, adding the four PRS resulted only in a significantly higher disease prevalence in the 98th and 99th percentiles of both the IHD and MI scores.
Conclusions: PRS improve cardiovascular risk stratification early in life, when knowledge of later-life risk factors is unavailable. However, by middle age, when many risk factors are known, the improvement attributable to PRS is marginal for the general population.
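The comparison described above — cross-validated LR trained with and without PRS features, scored by AUC — can be sketched as follows. This is a minimal illustration on synthetic data; the feature layout and effect sizes are invented for the example and are not from the study.

```python
# Sketch: compare cross-validated logistic regression AUC with and
# without PRS features. All data is synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000
# Established-at-birth features (e.g., sex, ancestry), simplified to 2 columns
birth = rng.normal(size=(n, 2))
# Four PRS columns, constructed here to carry signal about the label
prs = rng.normal(size=(n, 4))
logit = 0.3 * birth[:, 0] + 0.5 * prs.sum(axis=1)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def cv_auc(X, y):
    """Mean 5-fold cross-validated AUC for a logistic regression model."""
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

auc_base = cv_auc(birth, y)                      # birth factors only
auc_prs = cv_auc(np.hstack([birth, prs]), y)     # birth factors + PRS
print(f"AUC without PRS: {auc_base:.3f}, with PRS: {auc_prs:.3f}")
```

Because the synthetic label depends heavily on the PRS columns, the PRS-augmented model shows a clearly higher AUC, mirroring the pattern the abstract reports for established-at-birth features.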
Modern drug discovery efforts have had mediocre success rates and increasing development costs, which has encouraged pharmaceutical scientists to seek innovative approaches. Recently, with the rise of the fields of systems biology and metabolomics, network pharmacology (NP) has begun to emerge as a new paradigm in drug discovery, with a focus on multiple targets and drug combinations for treating disease. Studies on the benefits of drug combinations lay the groundwork for a renewed focus on natural products in drug discovery. Natural products consist of a multitude of constituents that can act on a variety of targets in the body to induce pharmacodynamic responses that may together culminate in an additive or synergistic therapeutic effect. Although natural products cannot be patented, they can be used as starting points in the discovery of potent combination therapeutics. The optimal mix of bioactive ingredients in natural products can be determined via phenotypic screening. The targets and molecular mechanisms of action of these active ingredients can then be determined using chemical proteomics and by implementing a reverse pharmacokinetics approach. This review article provides evidence supporting the potential benefits of natural product-based combination drugs, and summarizes drug discovery methods that can be applied to this class of drugs.
The in utero environment plays an essential role in shaping future growth and development. Psychological distress during pregnancy has been shown to perturb the delicate physiological milieu of pregnancy and has been associated with negative repercussions in the offspring, including adverse birth outcomes, long-term defects in cognitive development, behavioral problems during childhood, and high baseline levels of stress-related hormones. Fetal programming, involving epigenetic processes, may help explain the link between maternal prenatal stress and its negative effects on the child. Given the potential long-term effects of early-life stress on a child's health, it is crucial to minimize maternal distress during pregnancy. A number of recent studies have examined the usefulness of mindfulness-based programs to reduce prenatal psychological stress and improve maternal psychological health, and these are reviewed here. Overall, the findings are promising, but larger studies using randomized controlled designs are needed. It remains unclear whether such interventions could also improve child health outcomes, and whether these changes are modulated at the epigenetic level during fetal development. Further studies in this area are needed.
Despite the myriad peer-reviewed papers demonstrating novel Artificial Intelligence (AI)-based solutions to COVID-19 challenges during the pandemic, few have made a significant clinical impact, especially in diagnosis and disease precision staging. One major cause of this low impact is the lack of model transparency, which significantly limits AI adoption in real clinical practice. To solve this problem, AI models need to be explained to users. Thus, we have conducted a comprehensive study of Explainable Artificial Intelligence (XAI) following the PRISMA methodology. Our findings suggest that XAI can improve model performance, instill trust in users, and assist users in decision-making. In this systematic review, we introduce common XAI techniques and their utility, with specific examples of their application. We discuss the evaluation of XAI results because it is an important step for maximizing the value of AI-based clinical decision support systems. Additionally, we present traditional, modern, and advanced XAI models to demonstrate the evolution of novel techniques. Finally, we provide a best-practice guideline that developers can refer to during model experimentation. We also offer potential solutions, with specific examples, for common challenges in AI model experimentation. We hope this comprehensive review can promote AI adoption in biomedicine and healthcare.
Despite the myriad peer-reviewed papers demonstrating novel Artificial Intelligence (AI)-based solutions to COVID-19 challenges during the pandemic, few have made a significant clinical impact. The impact of artificial intelligence during the COVID-19 pandemic was greatly limited by a lack of model transparency. This systematic review examines the use of Explainable Artificial Intelligence (XAI) during the pandemic and how its use could overcome barriers to real-world success. We find that successful use of XAI can improve model performance, instill trust in the end-user, and provide the value needed to affect user decision-making. We introduce the reader to common XAI techniques, their utility, and specific examples of their application. Evaluation of XAI results is also discussed as an important step to maximize the value of AI-based clinical decision support systems. We illustrate the classical, modern, and potential future trends of XAI to elucidate the evolution of novel XAI techniques. Finally, we provide a checklist of suggestions for the experimental design process, supported by recent publications. Common challenges during the implementation of AI solutions are also addressed with specific examples of potential solutions. We hope this review may serve as a guide to improve the clinical impact of future AI-based solutions.
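To make the notion of "common XAI techniques" concrete, here is a minimal sketch of permutation importance, one of the simplest model-agnostic explanation methods of the kind such reviews cover. The data and model are synthetic stand-ins chosen for the example, not drawn from the reviewed studies.

```python
# Sketch: permutation importance as a simple model-agnostic XAI technique.
# Synthetic data; only feature 0 actually drives the label, so a faithful
# explanation should rank it highest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.2 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
# Shuffle each feature in turn and measure the drop in model score
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean.round(3))  # feature 0 should dominate
```

Techniques like this let a clinician-facing system report which inputs actually drove a prediction, which is one route to the end-user trust the review emphasizes.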
At the beginning of the COVID-19 pandemic, there was significant hype about the potential impact of artificial intelligence (AI) tools on COVID-19 diagnosis, prognosis, and surveillance. However, AI tools have not yet been widely successful. One key reason is that the COVID-19 pandemic demanded faster, real-time development of AI-driven clinical and health support tools, including rapid data collection, algorithm development, validation, and deployment, leaving insufficient time for proper data quality control. Learning from the hard lessons of COVID-19, we summarize the important health data quality challenges during the pandemic, such as lack of data standardization, missing data, tabulation errors, and noise and artifacts. We then conduct a systematic investigation of computational methods that address these issues, including emerging novel advanced AI data quality control methods that achieve better data quality outcomes and, in some cases, simplify or automate the data cleaning process. We hope this article can help the healthcare community improve health data quality going forward with novel AI development.
Genome-wide association studies (GWAS) have significantly advanced our understanding of the genetic underpinnings of diseases, but case and control cohort definitions for a given disease can vary between different published studies. For example, two GWAS for the same disease using the UK Biobank data set might use different data sources (e.g., self-reported questionnaires, hospital records) or different levels of granularity (i.e., specificity of inclusion criteria) to define cases and controls. The extent to which this variability in cohort definitions impacts the results of a GWAS is unclear. In this study, we systematically evaluated the effect of the data sources used for case and control definitions on GWAS findings. Using the UK Biobank, we selected three diseases: glaucoma, migraine, and iron-deficiency anemia. For each disease, we designed 13 GWAS, each using different combinations of data sources to define cases and controls, and then calculated the pairwise genetic correlations between all GWAS for each disease. We found that the data sources used to define cases for a given disease can have a significant impact on GWAS results, but the extent of this depends heavily on the disease in question. This suggests the need for greater scrutiny of how case cohorts are defined for GWAS.
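The core of the design above is that the same disease yields different case sets depending on the data source used. A minimal sketch of that step, on invented data (the column names are illustrative, not actual UK Biobank field names), is to define cases from two sources and quantify how much the resulting cohorts overlap:

```python
# Sketch: define case cohorts for one disease from two data sources and
# measure their overlap. Synthetic data; column names are hypothetical.
import pandas as pd

cohort = pd.DataFrame({
    "eid": range(1, 11),  # participant IDs
    "self_report_glaucoma": [1, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    "hospital_icd_glaucoma": [1, 0, 0, 1, 1, 0, 0, 0, 0, 1],
})

cases_self = set(cohort.loc[cohort.self_report_glaucoma == 1, "eid"])
cases_icd = set(cohort.loc[cohort.hospital_icd_glaucoma == 1, "eid"])

# Jaccard index: size of intersection over size of union
jaccard = len(cases_self & cases_icd) / len(cases_self | cases_icd)
print(f"self-report cases: {len(cases_self)}, ICD cases: {len(cases_icd)}, "
      f"Jaccard overlap: {jaccard:.2f}")
```

When the overlap between source-specific case sets is low, the downstream GWAS effectively study somewhat different phenotypes, which is what the pairwise genetic correlations in the study quantify at the summary-statistics level.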
COVID-19 causes significant morbidity and mortality, and early intervention is key to minimizing deadly complications. Available treatments, such as monoclonal antibody therapy, may limit complications, but only when given soon after symptom onset. Unfortunately, these treatments are often expensive, in limited supply, require administration within a hospital setting, and should be given before the onset of severe symptoms. These challenges have created the need for early triage of patients likely to develop life-threatening complications. To meet this need, we developed an automated patient risk assessment model using a real-world hospital system dataset with over 17,000 COVID-positive patients. Specifically, for each COVID-positive patient, we generate a separate risk score for each of four clinical outcomes: death within 30 days, mechanical ventilator use, ICU admission, and any catastrophic event (a superset of dangerous outcomes). We hypothesized that a deep learning binary classification approach can generate these four risk scores from electronic health records data at the time of diagnosis. Our approach achieves strong performance on the four tasks, with an area under the receiver operating characteristic curve (AUROC) for any catastrophic outcome, death within 30 days, ventilator use, and ICU admission of 86.7%, 88.2%, 86.2%, and 87.8%, respectively. In addition, we visualize the sensitivity and specificity of these risk scores to allow clinicians to customize their usage for different clinical outcomes. We believe this work fulfills a clear clinical need for early detection of objective clinical outcomes and can be used for early screening for treatment intervention.
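The evaluation pattern described above — one binary classifier per outcome, each producing a per-patient risk score scored by AUROC — can be sketched as follows. This uses synthetic data and logistic regression as a stand-in for the paper's deep learning model; the feature and outcome names are illustrative only.

```python
# Sketch: one risk score per clinical outcome, each from its own binary
# classifier, evaluated with AUROC. Synthetic data; logistic regression
# stands in for the deep learning model described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, d = 2000, 10
X = rng.normal(size=(n, d))  # stand-in for EHR features at diagnosis time
outcomes = ["death_30d", "ventilator", "icu", "catastrophic"]

aurocs = {}
for i, name in enumerate(outcomes):
    # Each synthetic outcome depends on a different feature, plus noise
    y = (X[:, i] + 0.5 * rng.normal(size=n) > 1).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    risk = model.predict_proba(X_te)[:, 1]  # per-patient risk score
    aurocs[name] = roc_auc_score(y_te, risk)

print({k: round(v, 3) for k, v in aurocs.items()})
```

Keeping the outcomes as separate binary tasks, rather than one multi-class problem, matches the clinical need: a patient can simultaneously be at high risk of ICU admission and ventilator use, and each score can be thresholded independently for sensitivity/specificity trade-offs.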