Machine learning (ML), a branch of artificial intelligence, can generate predictive models more efficiently and effectively than conventional statistical methods by detecting hidden patterns within large datasets. Several areas within hepatology lend themselves to these methods. In this review, we examine the literature pertaining to machine learning in hepatology and liver transplant medicine. We provide an overview of the strengths and limitations of ML tools and their potential applications to both clinical and molecular data in hepatology. ML has been applied to various types of data in liver disease research, including clinical, demographic, molecular, radiological, and pathological data. We anticipate that the use of ML tools to generate predictive algorithms will change the face of clinical practice in hepatology and transplantation. This review offers readers the opportunity to learn about available ML tools and their potential applications to questions of interest in hepatology.
Cardiovascular disease (CVD) significantly contributes to morbidity and mortality after liver transplantation (LT). Cirrhotic cardiomyopathy (CCM) is a risk factor for CVD after transplant. CCM criteria were originally introduced in 2005 with a revision proposed in 2020 reflecting echocardiographic technology advancements. This study assesses the two criteria sets in predicting major adverse cardiac events (MACE) after transplant. This single‐center retrospective study reviewed adult LT recipients between January 1, 2009, and December 31, 2018. Patients with insufficient pre‐LT echocardiographic data, prior ischemic heart disease, portopulmonary hypertension, or longitudinal care elsewhere were excluded. The primary composite outcome was MACE (arrhythmia, heart failure, cardiac arrest, and/or cardiac death) after transplant. Of 1165 patients, 210 met the eligibility criteria. CCM was present in 162 patients (77%) per the original criteria and 64 patients (30%) per the revised criteria. There were 44 MACE and 31 deaths in the study period. Of the deaths, 38.7% occurred secondary to CVD. CCM defined by the original criteria was not associated with MACE after LT (p = 0.21), but the revised definition was significantly associated with MACE (hazard ratio [HR], 1.93; 95% confidence interval, 1.05–3.56; p = 0.04) on multivariable analysis. Echocardiographic variable analysis demonstrated low septal e’ as the most predictive variable for MACE after LT (HR, 3.45; p < 0.001). CCM, only when defined by the revised criteria, was associated with increased risk for MACE after LT, validating the recently revised CCM definition. Abnormal septal e’, reflecting impaired relaxation, appears to be the most predictive echocardiographic criterion for MACE after LT.
Solid-organ transplantation is a life-saving treatment for end-stage organ disease in highly selected patients. Alongside the tremendous progress of the last several decades, new challenges have emerged. The growing disparity between organ demand and supply requires optimal patient/donor selection and matching. Improvements in long-term graft and patient survival require data-driven diagnosis and management of post-transplant complications. The growing abundance of clinical, genetic, radiologic, and metabolic data in transplantation has led to increasing interest in applying machine-learning (ML) tools that can uncover hidden patterns in large datasets. ML algorithms have been applied to predictive modeling of waitlist mortality, donor–recipient matching, survival prediction, and diagnosis and prediction of post-transplant complications, with the aim of optimizing immunosuppression and management. In this review, we provide insight into the various applications of ML in transplant medicine, why these tools were used to evaluate specific clinical questions, and the potential of ML to transform the care of transplant recipients. Thirty-six articles were selected after a comprehensive search of the following databases: Ovid MEDLINE; Ovid MEDLINE Epub Ahead of Print and In-Process & Other Non-Indexed Citations; Ovid Embase; Cochrane Database of Systematic Reviews (Ovid); and Cochrane Central Register of Controlled Trials (Ovid). In summary, these studies showed that ML techniques hold great potential to improve the outcomes of transplant recipients. Future work is required to improve the interpretability of these algorithms, ensure generalizability through larger-scale external validation, and establish the infrastructure needed to permit clinical integration.
Background: Severe renal dysfunction is common among liver transplant (LT) candidates and often prompts consideration of simultaneous liver‐kidney transplantation (SLKT). In view of the 2017 United Network for Organ Sharing (UNOS) criteria for SLKT, we investigated the likelihood and predictors of renal recovery among patients who met these criteria yet received a liver transplant alone (LTA). Methods: We retrospectively analyzed relative renal recovery (RRR; increase in eGFR to >30 ml/min) in adult LTA recipients between 1/2009 and 1/2019. Results: Of 1165 LT recipients, 54 met the 2017 UNOS criteria, 37 of whom received LTA. RRR occurred in 84% of LTA recipients, none of whom had a pre‐LT eGFR <20 ml/min. Sustained RRR (>180 days) occurred in 43% of patients. Prolonged pre‐LT severe renal impairment (eGFR <30 ml/min) predicted failure to achieve sustained RRR (HR 0.19 per 90 days; 95% CI, 0.04–0.87; p < .005), whereas an eGFR measurement >30 ml/min within 90 days pre‐LT (HR 5.52; 95% CI, 1.23–24.79; p = .01) was associated with achieving sustained RRR. Sustained RRR was protective against the composite outcome of renal replacement therapy, kidney transplant, and death (HR 0.21; p = .01). Conclusion: LT candidates who meet the 2017 UNOS criteria for SLKT yet undergo LTA can still achieve relative renal recovery post‐LT, exceeding 80% on short‐term follow‐up and 40% on long‐term follow‐up. eGFR trends within 90 days pre‐LT can predict sustained renal recovery, which appears protective against adverse outcomes. These recovery rates support applying the more restrictive SLKT criteria outlined in this article and increasing utilization of the safety net (SN) policy for patients who do not meet the proposed criteria.
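The abstract's two endpoints can be operationalized in code. The sketch below is one plausible reading of the definitions given above (RRR: any post-LT eGFR >30 ml/min; sustained RRR: eGFR still >30 ml/min beyond 180 days); the class and function names are illustrative, not from the study, and the study's exact chart-review rules may differ.

```python
from dataclasses import dataclass

RRR_EGFR_THRESHOLD = 30   # ml/min; eGFR above this counts as relative renal recovery
SUSTAINED_DAYS = 180      # recovery observed beyond this point counts as sustained

@dataclass
class EgfrMeasurement:
    days_post_lt: int   # days after liver transplant
    egfr: float         # estimated GFR, ml/min

def classify_recovery(measurements: list[EgfrMeasurement]) -> tuple[bool, bool]:
    """Return (achieved_rrr, sustained_rrr) from a patient's post-LT eGFR series."""
    achieved = any(m.egfr > RRR_EGFR_THRESHOLD for m in measurements)
    sustained = any(
        m.egfr > RRR_EGFR_THRESHOLD and m.days_post_lt > SUSTAINED_DAYS
        for m in measurements
    )
    return achieved, sustained
```

For example, a patient whose eGFR reaches 45 ml/min at day 30 but falls to 25 ml/min by day 200 would be classified as achieving RRR but not sustained RRR under this reading.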
Background: Although guidelines recommend primary care–driven management of NAFLD, workflow constraints hinder feasibility. Leveraging electronic health records to risk-stratify patients offers a scalable, workflow-integrated strategy. Materials and Methods: We prospectively evaluated the ability of an electronic health record–embedded clinical decision support system to risk-stratify patients with NAFLD and detect gaps in care. Patients missing the annual laboratory testing needed to calculate the Fibrosis-4 score (FIB-4), or missing necessary linkage to further care, were considered to have a gap in care. Linkage to care was defined as referral for either elastography-based testing or consultation in hepatology clinic, depending on clinical and biochemical characteristics. Results: Patients with NAFLD often lacked annual screening labs within primary care settings (1129/2154; 52%). Linkage to care was low across all categories, with <3% of patients with an abnormal FIB-4 undergoing further evaluation. Discussion: Significant care gaps exist within primary care for screening and risk stratification of patients with NAFLD and can be efficiently addressed using electronic health record functionality.
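The risk stratification described above rests on FIB-4, which is computed from age, AST, ALT, and platelet count. A minimal sketch follows; the function names are illustrative, and the cut-offs shown (1.3 and 2.67) are the commonly cited NAFLD thresholds, which may differ from those used in the study's decision support system.

```python
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float,
         platelets_10e9_l: float) -> float:
    """Fibrosis-4 index: (age x AST) / (platelets x sqrt(ALT)).

    AST and ALT in U/L; platelets in 10^9/L.
    """
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

def fib4_risk_band(score: float) -> str:
    """Map a FIB-4 score to a fibrosis risk band (commonly cited NAFLD cut-offs)."""
    if score < 1.3:
        return "low"
    if score > 2.67:
        return "high"
    return "indeterminate"
```

For example, a 60-year-old with AST 40 U/L, ALT 25 U/L, and platelets 150 × 10⁹/L has FIB-4 = (60 × 40) / (150 × 5) = 3.2, placing them in the high-risk band and, under the workflow described above, warranting linkage to further care.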