Background: There has been a significant increase in the development of artificial intelligence (AI) for clinical decision support. Historically, these were mostly knowledge-based systems, but recent advances include non-knowledge-based systems using some form of machine learning. Healthcare professionals' ability to trust a technology, and to understand how it benefits patients or improves care delivery, is known to be important for their adoption of that technology. For non-knowledge-based AI for clinical decision support, these issues are poorly understood.

Objective: To qualitatively synthesise evidence on the experiences of healthcare professionals in routinely using non-knowledge-based AI to support their clinical decision-making.
Methods: In June 2023, we searched four electronic databases: MEDLINE, EMBASE, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and Web of Science, with no language or date limits. We also contacted relevant experts and searched the reference lists of included studies. We included studies of any design that reported the experiences of healthcare professionals using non-knowledge-based systems for clinical decision support in their work settings. We completed double independent quality assessment for all included studies using the Mixed Methods Appraisal Tool (MMAT). We used a theoretically informed thematic approach to synthesise the findings.

Results: After screening 7,552 titles and 182 full-text articles, we included 25 studies conducted in nine countries. Most of the included studies were qualitative (n=14); the remainder were quantitative (n=7) or mixed-methods (n=4) studies. Overall, we identified seven themes: (i) Understanding of AI applications; (ii) Level of trust and confidence in AI tools; (iii) Judging the added value of AI; (iv) Data availability and limitations of AI; (v) Time and competing priorities; (vi) Concerns about governance; and (vii) Collaboration to facilitate the implementation and use of AI. The first three themes occurred most frequently. For example, many studies reported that healthcare professionals were concerned about not understanding the AI outputs or the rationale behind them. There were also issues with confidence in the accuracy of AI applications and their recommendations. Some healthcare professionals believed that AI provided added value and improved decision-making; some reported that it only served to confirm their clinical judgement; and others did not find it useful at all.

Conclusions: Our review identified several important issues documented across studies of healthcare professionals' use of AI in real-world healthcare settings.
Opinions of healthcare professionals regarding the added value of AI for supporting clinical decision-making varied widely, and many professionals had concerns about their understanding of, and trust in, this technology. The findings of this review emphasise the need for concerted efforts to optimise the integration of AI in real-world healthcare settings. Clinical Trial:...