Artificial intelligence (AI) can transform health care practices with its increasing ability to translate the uncertainty and complexity in data into actionable—though imperfect—clinical decisions or suggestions. In the evolving relationship between humans and AI, trust is one key mechanism that shapes clinicians’ use and adoption of AI. Trust is a psychological mechanism to deal with the uncertainty between what is known and unknown. Several research studies have highlighted the need for improving AI-based systems and enhancing their capabilities to help clinicians. However, assessing the magnitude and impact of human trust in AI technology demands substantial attention. Will a clinician trust an AI-based system? What are the factors that influence human trust in AI? Can trust in AI be optimized to improve decision-making processes? In this paper, we focus on clinicians as the primary users of AI systems in health care and present factors shaping trust between clinicians and AI. We highlight critical challenges related to trust that should be considered during the development of any AI system for clinical use.
Background Artificial intelligence (AI) provides opportunities to identify the health risks of patients and thus influence patient safety outcomes. Objective The purpose of this systematic literature review was to identify and analyze quantitative studies utilizing or integrating AI to address and report clinical-level patient safety outcomes. Methods We restricted our search to the PubMed, PubMed Central, and Web of Science databases to retrieve research articles published in English between January 2009 and August 2019. We focused on quantitative studies that reported positive, negative, or intermediate changes in patient safety outcomes using AI apps, specifically those based on machine-learning algorithms and natural language processing. Quantitative studies reporting only AI performance but not its influence on patient safety outcomes were excluded from further review. Results We identified 53 eligible studies, which were summarized concerning their patient safety subcategories, the most frequently used AI, and reported performance metrics. Recognized safety subcategories were clinical alarms (n=9; mainly based on decision tree models), clinical reports (n=21; based on support vector machine models), and drug safety (n=23; mainly based on decision tree models). Analysis of these 53 studies also identified two essential findings: (1) the lack of a standardized benchmark and (2) heterogeneity in AI reporting. Conclusions This systematic review indicates that AI-enabled decision support systems, when implemented correctly, can aid in enhancing patient safety by improving error detection, patient stratification, and drug management. Future work is still needed for robust validation of these systems in prospective and real-world clinical environments to understand how well AI can predict safety outcomes in health care settings.
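The heterogeneity in AI reporting noted above stems partly from studies quoting different subsets of performance metrics. As a minimal illustration (not drawn from any reviewed study; the counts are invented), the common metrics can all be derived from the same confusion-matrix counts, which is one way a standardized benchmark could report them consistently:

```python
# Hypothetical illustration: deriving the performance metrics commonly
# reported by the reviewed studies (sensitivity, specificity, PPV, F1)
# from a single set of confusion-matrix counts.

def safety_metrics(tp, fp, fn, tn):
    """Return standard classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # recall: safety events correctly flagged
    specificity = tn / (tn + fp)          # non-events correctly ignored
    ppv = tp / (tp + fp)                  # precision of raised alerts
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "f1": f1}

# Invented example: a drug-safety alert model raised 80 true and 20 false
# alerts, missed 10 events, and correctly suppressed 890 non-events.
m = safety_metrics(tp=80, fp=20, fn=10, tn=890)
print({k: round(v, 3) for k, v in m.items()})
```

Reporting all four metrics from the same counts, rather than a study-specific subset, would make results directly comparable across the safety subcategories identified above.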
Objectives Geriatric clinical care is a multidisciplinary assessment designed to evaluate older patients’ (aged 65 years and above) functional ability, physical health, and cognitive well-being. The majority of these patients suffer from multiple chronic conditions and require special attention. Recently, hospitals have been utilizing various artificial intelligence (AI) systems to improve care for elderly patients. The purpose of this systematic literature review is to understand the current use of AI systems, particularly machine learning (ML), in geriatric clinical care for chronic diseases. Materials and Methods We restricted our search to eight databases, namely PubMed, WorldCat, MEDLINE, ProQuest, ScienceDirect, SpringerLink, Wiley, and ERIC, to analyze research articles published in English between January 2010 and June 2019. We focused on studies that used ML algorithms in the care of geriatric patients with chronic conditions. Results We identified 35 eligible studies and classified them into three groups: psychological disorders (n = 22), eye diseases (n = 6), and others (n = 7). This review identified the lack of standardized ML evaluation metrics and the need for data governance specific to health care applications. Conclusion More studies and ML standardization tailored to health care applications are required to confirm whether ML could aid in improving geriatric clinical care.
Background Despite advancements in artificial intelligence (AI) to develop prediction and classification models, little research has been devoted to real-world translations with a user-centered design approach. AI development studies in the health care context have often ignored two critical factors, ecological validity and human cognition, creating challenges at the interface with clinicians and the clinical environment. Objective The aim of this literature review was to investigate the contributions made by major human factors communities in health care AI applications. This review also discusses emerging research gaps and provides future research directions to facilitate a safer and user-centered integration of AI into the clinical workflow. Methods We performed an extensive mapping review to capture all relevant articles published within the last 10 years in the major human factors journals and conference proceedings listed in the “Human Factors and Ergonomics” category of the Scopus Master List. In each published volume, we searched for studies reporting qualitative or quantitative findings in the context of AI in health care. Studies are discussed based on key principles such as workload evaluation, usability, trust in technology, perception, and user-centered design. Results Forty-eight articles were included in the final review. Most of the studies emphasized user perception, the usability of AI-based devices or technologies, cognitive workload, and users’ trust in AI. The review revealed a nascent but growing body of literature focusing on augmenting health care AI; however, little effort has been made to ensure ecological validity with user-centered design approaches. Moreover, few studies (n=5 against clinical/baseline standards, n=5 against clinicians) compared their AI models against a standard measure.
Conclusions Human factors researchers should actively be part of efforts in AI design and implementation, as well as dynamic assessments of AI systems’ effects on interaction, workflow, and patient outcomes. An AI system is part of a greater sociotechnical system. Investigators with human factors and ergonomics expertise are essential when defining the dynamic interaction of AI within each element, process, and result of the work system.
The onset of COVID-19 has escalated healthcare workers’ psychological distress. Multiple factors, including prolonged exposure to COVID-19 patients, irregular working hours, and workload, have substantially contributed to stress and burnout among healthcare workers. To explore the impact of COVID-19 on healthcare workers, our study compares job stress, social support, and intention to leave the job among healthcare workers working in a pandemic hospital (HP) and a non-pandemic hospital (HNP) in Turkey during the pandemic. The cross-sectional, paper-based survey involved 403 healthcare workers, including physicians, registered nurses, health technicians, and auxiliary staff, across two hospitals from 1 September 2020 to 30 November 2020. The findings indicate a significant impact of ‘job stress’ on ‘intent to leave’ the job among participants in the HP. We noted that both ‘intent to leave’ and ‘job stress’ were significantly higher among the HP healthcare workers than among those working in the HNP. Moreover, workers’ ‘social support’ was significantly lower in the HP. Healthcare workers during COVID-19 face several hurdles, such as job stress, reduced social support, and excessive workload, all of which are potential factors influencing a care provider’s intent to leave the job.
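A hedged sketch of the kind of two-group comparison such a survey analysis involves (the scores below and the choice of Welch’s t-test are illustrative assumptions, not the study’s actual data or analysis code):

```python
import math

# Illustrative sketch only: comparing mean job-stress scores between the
# pandemic (HP) and non-pandemic (HNP) hospital with Welch's t-test, which
# does not assume equal variances. All scores below are invented.

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (ma - mb) / se

hp = [4.1, 3.8, 4.5, 4.2, 3.9, 4.4]   # invented pandemic-hospital stress scores
hnp = [3.2, 3.5, 3.0, 3.6, 3.1, 3.4]  # invented non-pandemic-hospital scores
print(round(welch_t(hp, hnp), 2))     # positive t: higher mean stress in HP
```

In practice the degrees of freedom (Welch-Satterthwaite) and a p-value would also be computed; the sketch only shows the core contrast between the two groups.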
With the emergence of the Hospital Readmission Reduction Program of the Centers for Medicare and Medicaid Services on October 1, 2012, forecasting unplanned patient readmission risk became crucial to the healthcare domain. There is substantial work in the literature on developing readmission risk prediction models; however, these models are not accurate enough to be deployed in an actual clinical setting. Our study considers patient readmission risk as the objective for optimization and develops a useful risk prediction model to address unplanned readmissions. Furthermore, a genetic algorithm and a greedy ensemble are used to optimize the developed model’s constraints.
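The paper’s exact formulation is not shown here; as an illustrative sketch of the greedy-ensemble idea under assumed inputs, the snippet below greedily selects (with replacement) the candidate model whose inclusion most lowers validation error, averaging the predicted readmission probabilities of the selected models:

```python
# Hedged sketch (not the authors' code): greedy forward selection of an
# ensemble for readmission-risk prediction. Each candidate model is
# represented by its predicted probabilities on a validation set; at each
# round we add the model whose inclusion most reduces the Brier score.

def brier(preds, labels):
    """Mean squared error between predicted probabilities and 0/1 labels."""
    return sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(labels)

def greedy_ensemble(model_preds, labels, rounds=10):
    """Return (selected model indices, final ensemble Brier score)."""
    chosen, current = [], [0.0] * len(labels)
    for _ in range(rounds):
        best_i, best_err = None, float("inf")
        for i, preds in enumerate(model_preds):
            k = len(chosen) + 1
            blended = [(c * len(chosen) + p) / k for c, p in zip(current, preds)]
            err = brier(blended, labels)
            if err < best_err:
                best_i, best_err = i, err
        preds = model_preds[best_i]
        current = [(c * len(chosen) + p) / (len(chosen) + 1)
                   for c, p in zip(current, preds)]
        chosen.append(best_i)
    return chosen, brier(current, labels)

labels = [1, 0, 1, 0]                  # invented readmission outcomes
model_preds = [
    [0.9, 0.1, 0.8, 0.2],              # a strong candidate model
    [0.6, 0.4, 0.7, 0.5],              # a weak candidate model
    [0.2, 0.9, 0.3, 0.8],              # an anti-correlated candidate
]
chosen, err = greedy_ensemble(model_preds, labels, rounds=3)
print(chosen, round(err, 3))           # selection favors the strongest model
```

Allowing repeats is the standard trick in greedy ensemble selection: re-adding a model increases its effective weight in the averaged prediction.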
Autism spectrum condition (ASC), or autism spectrum disorder (ASD), is primarily identified with the help of behavioral indications encompassing social, sensory, and motor characteristics. Although stereotyped, repetitive motor actions are assessed during diagnosis, quantifiable measures that characterize kinematic features in the movement patterns of autistic persons have not been adequately studied, hindering advances in understanding the etiology of motor impairment. Subject-level factors, such as behavioral traits that influence ASD, need further exploration. At present, limited autism datasets relevant to ASD screening are available, and a majority of them are genetic. Hence, in this study, we used a dataset related to autism screening comprising ten behavioral and ten personal attributes that have been effective in distinguishing ASD cases from controls in behavioral science. ASD diagnosis is time-consuming and costly, and the burgeoning number of ASD cases worldwide creates a need for a fast and economical screening tool. Our study aimed to implement an artificial neural network with the Levenberg-Marquardt algorithm to detect ASD and examine its predictive accuracy, and subsequently to develop a clinical decision support system for early ASD identification.
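The study’s network itself is not reproduced here, but the Levenberg-Marquardt idea it trains with — damped Gauss-Newton steps on a least-squares objective, with the damping adapted after each accepted or rejected step — can be sketched on a one-parameter toy problem (all data below are synthetic):

```python
import math

# Illustrative sketch only: a one-parameter Levenberg-Marquardt fit, the same
# damped least-squares principle used to train the paper's neural network.
# We fit y = exp(a * x) to noiseless synthetic data generated with a = 0.5,
# so the solver should recover a close to 0.5.

def lm_fit(xs, ys, a=0.0, lam=1e-3, iters=100):
    for _ in range(iters):
        resid = [math.exp(a * x) - y for x, y in zip(xs, ys)]
        jac = [x * math.exp(a * x) for x in xs]        # d(residual)/da
        jtj = sum(j * j for j in jac)
        jtr = sum(j * r for j, r in zip(jac, resid))
        step = jtr / (jtj + lam)                       # damped Gauss-Newton step
        new_a = a - step
        new_err = sum((math.exp(new_a * x) - y) ** 2 for x, y in zip(xs, ys))
        old_err = sum(r * r for r in resid)
        if new_err < old_err:
            a, lam = new_a, lam / 2                    # accept: trust the model more
        else:
            lam *= 4                                   # reject: damp harder
    return a

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * x) for x in xs]
print(round(lm_fit(xs, ys), 4))
```

In the multi-parameter neural-network case the scalar `jtj + lam` becomes the matrix `JᵀJ + λI`, but the accept/reject damping logic is the same.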
BACKGROUND Hepatitis E virus (HEV) infection is underdiagnosed due to the use of serological assays with low sensitivity. Although most patients with HEV recover completely, HEV infection among patients with pre-existing chronic liver disease and organ-transplant recipients on immunosuppressive therapy can result in decompensated liver disease and death. AIM To determine the prevalence of HEV infection in solid organ transplant (SOT) recipients. METHODS We searched Ovid MEDLINE, EMBASE, and the Cochrane Library for eligible articles through October 2020. The inclusion criteria consisted of adult patients with a history of SOT. HEV infection was confirmed by HEV-immunoglobulin G, HEV-immunoglobulin M, or HEV RNA assay. RESULTS Of 563 citations, a total of 22 studies (n = 4557) were included in this meta-analysis. The pooled estimated prevalence of HEV infection in SOT patients was 20.2% [95% confidence interval (CI): 14.9-26.8]. The pooled estimated prevalence of HEV infection for each organ transplant was as follows: liver (27.2%; 95%CI: 20.0-35.8), kidney (12.8%; 95%CI: 9.3-17.3), heart (12.8%; 95%CI: 9.3-17.3), and lung (5.6%; 95%CI: 1.6-17.9). Comparison across organ transplants demonstrated statistical significance (Q = 16.721, P = 0.002). The subgroup analyses showed that the prevalence of HEV infection among SOT recipients was significantly higher in middle-income countries than in high-income countries. The pooled estimated prevalence of de novo HEV infection was 5.1% (95%CI: 2.6-9.6), and the pooled estimated prevalence of acute HEV infection was 4.3% (95%CI: 1.9-9.4). CONCLUSION HEV infection is common in SOT recipients, particularly in middle-income countries. HEV infection in lung transplant recipients is considerably less common than in other organ transplants. More studies examining the clinical impacts of HEV infection in SOT recipients, such as graft failure, rejection, and mortality, are warranted.
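As a simplified, hypothetical illustration of prevalence pooling (the review itself uses real study data and, typically for such heterogeneity, a random-effects model), a fixed-effect inverse-variance pooling on the logit scale with a 95% CI might look like:

```python
import math

# Simplified illustration: fixed-effect inverse-variance pooling of
# prevalence estimates on the logit scale, back-transformed with a 95% CI.
# The study counts below are invented, not taken from the review.

def pooled_prevalence(events_totals):
    """events_totals: list of (events, n) per study.
    Returns (pooled prevalence, CI lower, CI upper)."""
    w_sum = wx_sum = 0.0
    for events, n in events_totals:
        p = events / n
        logit = math.log(p / (1 - p))
        var = 1 / (n * p * (1 - p))        # approximate variance of the logit
        w = 1 / var                        # inverse-variance weight
        w_sum += w
        wx_sum += w * logit
    mean = wx_sum / w_sum
    se = math.sqrt(1 / w_sum)

    def inv_logit(v):
        return 1 / (1 + math.exp(-v))

    return inv_logit(mean), inv_logit(mean - 1.96 * se), inv_logit(mean + 1.96 * se)

# Hypothetical example: three studies of HEV seroprevalence in SOT recipients.
est, lo, hi = pooled_prevalence([(30, 150), (22, 100), (51, 300)])
print(f"{est:.3f} (95%CI: {lo:.3f}-{hi:.3f})")
```

A random-effects version would additionally estimate a between-study variance (e.g., via DerSimonian-Laird) and add it to each study’s weight denominator, widening the interval when studies disagree.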