Abstract: The acceptance of artificial intelligence (AI) systems by health professionals is crucial to obtaining a positive impact on the diagnosis pathway. We evaluated user satisfaction with an AI system for the automated detection of findings in chest x-rays after five months of use at the Emergency Department. We collected quantitative and qualitative data to analyze the main aspects of user satisfaction, following the Technology Acceptance Model. We selected the intended users of the system as study participants: rad…
“…Although a staged approach to implementation and evaluation was evident in many studies (e.g., [48,66]), only three tracked actual use of systems by clinicians [28,29,75]. Evaluation of user experience was mostly confined to assessing satisfaction via surveys.…”
Section: Discussion
Classification: mentioning
Confidence: 99%
“…Nine studies examined systems for a variety of clinical areas in hospital and outpatient radiology departments. Taking a theory-driven approach, Rabinovich et al [28] used the Technology Acceptance Model to evaluate user satisfaction and actual use of an assistive system for chest x-ray interpretation in an Argentinian emergency department (ED) over 5 months. The system was used for 15% of studies (n=1,186), with an average of eight accesses per day.…”
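The usage figures in the snippet above can be checked for internal consistency. This is a rough sketch, assuming that n=1,186 counts the studies for which the system was used and that 5 months corresponds to roughly 150 days (both assumptions, not stated in the snippet):

```python
# Sanity check of the reported usage statistics.
# Assumptions (not stated in the source): n=1,186 is the number of
# AI-assisted studies, and 5 months ~ 150 days.
n_assisted = 1186
days = 5 * 30

accesses_per_day = n_assisted / days
print(round(accesses_per_day))  # -> 8, matching "eight accesses per day"

# Implied total chest x-ray volume, if 1,186 is 15% of all studies:
total_studies = n_assisted / 0.15
print(round(total_studies))  # -> ~7,907 studies over the 5-month period
```

Under these assumptions the "eight accesses per day" figure is consistent with n=1,186 over 5 months, which suggests n=1,186 refers to the AI-assisted subset rather than the total study volume.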
Aims and objectives: To examine the nature and use of automation in contemporary clinical information systems by reviewing studies reporting the implementation and evaluation of artificial intelligence (AI) technologies in healthcare settings.
Method: PubMed/MEDLINE, Web of Science, EMBASE, the tables of contents of major informatics journals, and the bibliographies of articles were searched for studies reporting evaluation of AI in clinical settings from January 2021 to December 2022. We documented the clinical application areas and tasks supported, and the level of system autonomy. Reported effects on user experience, decision-making, care delivery and outcomes were summarised.
Results: AI technologies are being applied in a wide variety of clinical areas. Most contemporary systems utilise deep learning, use routinely collected data, support diagnosis and triage, are assistive (requiring users to confirm or approve AI provided information or decisions), and are used by doctors in acute care settings in high-income nations. AI systems are integrated and used within existing clinical information systems including electronic medical records. There is limited support for One Health goals. Evaluation is largely based on quantitative methods measuring effects on decision-making.
Conclusion: AI systems are being implemented and evaluated in many clinical areas. There remain many opportunities to understand patterns of routine use and evaluate effects on decision-making, care delivery and patient outcomes using mixed-methods. Support for One Health including integrating data about environmental factors and social determinants needs further exploration.
“…7 Comparison to previous qualitative studies (emergency clinicians): Previous qualitative interview-based studies of emergency clinicians' attitudes towards AI have had smaller numbers of participants and mostly focused on attitudes towards specific AI-based tools, such as detecting pathology in chest X-rays, diagnosing aortic dissection, or predicting 30-day mortality [30-33]. In these studies, attitudes towards AI were generally positive, with clinicians viewing AI as a tool to supplement clinical expertise and help inexperienced clinicians [30-33]. However, there were also concerns that AI could bias clinical decisions, and that inexperienced clinicians could become over-reliant on such tools.…”
Section: Discussion
Classification: mentioning
Confidence: 99%
“…In these studies [30-33], attitudes towards AI were generally positive, with clinicians viewing AI as a tool to supplement clinical expertise and help inexperienced clinicians [30-33]. However, there were also concerns that AI could bias clinical decisions, and that inexperienced clinicians could become over-reliant on such tools [31-33]. Other concerns included the trustworthiness of AI systems, alarm fatigue, the medicolegal risk of documenting AI-based predictions, the impact on clinician autonomy, and the multiple human factors that could be overlooked by AI.…”
Section: Discussion
Classification: mentioning
Confidence: 99%
Objective: To assess Australian and New Zealand emergency clinicians' attitudes towards the use of artificial intelligence (AI) in emergency medicine.
Methods: We undertook a qualitative interview-based study based on grounded theory. Participants were recruited through ED internal mailing lists, the Australasian College for Emergency Medicine Bulletin, and the research teams' personal networks. Interviews were transcribed, coded, and themes presented.
Results: Twenty-five interviews were conducted between July 2021 and May 2022. Thematic saturation was achieved after 22 interviews. Most participants were from either Western Australia (52%) or Victoria (16%) and were consultants (96%). More participants reported feeling optimistic (10/25) than neutral (6/25), pessimistic (2/25) or mixed (7/25) towards the use of AI in the ED. A minority expressed scepticism regarding the feasibility or value of implementing AI in the ED. Multiple potential risks and ethical issues were discussed by participants, including skill loss from over-reliance on AI, algorithmic bias, patient privacy, and concerns over liability. Participants also discussed perceived inadequacies in existing information technology systems. Participants felt that AI technologies would be used as decision support tools and would not replace the roles of emergency clinicians. Participants were not concerned about the impact of AI on their job security. Most (17/25) participants thought that AI would impact emergency medicine within the next 10 years.
Conclusions: Emergency clinicians interviewed were generally optimistic about the use of AI in emergency medicine, so long as it is used as a decision support tool and they maintain the ability to override its recommendations.
Background: There has been a significant increase in the development of artificial intelligence (AI) for clinical decision support. Historically these were mostly knowledge-based systems, but recent advances include non-knowledge-based systems using some form of machine learning. The ability of healthcare professionals to trust technology and understand how it benefits patients or improves care delivery is known to be important for their adoption of that technology. For non-knowledge-based AI for clinical decision support, these issues are poorly understood.
Objective: To qualitatively synthesise evidence on the experiences of healthcare professionals in routinely using non-knowledge-based AI to support their clinical decision-making.
Methods: In June 2023 we searched four electronic databases: MEDLINE, EMBASE, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and Web of Science, with no language or date limit. We also contacted relevant experts and searched the reference lists of included studies. We included studies of any design which reported the experiences of healthcare professionals using non-knowledge-based systems for clinical decision support in their work settings. We completed double independent quality assessment for all included studies using the Mixed Methods Appraisal Tool (MMAT). We used a theoretically informed thematic approach to synthesise the findings.
Results: After screening 7,552 titles and 182 full-text articles, we included 25 studies conducted in nine different countries. Most of the included studies were qualitative (n=14); the remainder were quantitative (n=7) and mixed-methods (n=4) studies. Overall, we identified seven themes: (i) understanding of AI applications; (ii) level of trust and confidence in AI tools; (iii) judging the added value of AI; (iv) data availability and limitations of AI; (v) time and competing priorities; (vi) concern about governance; and (vii) collaboration to facilitate the implementation and use of AI. The first three themes occurred most frequently. For example, many studies reported that healthcare professionals were concerned about not understanding the AI outputs or the rationale behind them. There were issues with confidence in the accuracy of the AI applications and their recommendations. Some healthcare professionals believed that AI provided added value and improved decision-making, some reported that it only served as a confirmation of their clinical judgment, while others did not find it useful at all.
Conclusions: Our review identified several important issues documented in various studies on healthcare professionals' use of AI in real-world healthcare settings. Opinions of healthcare professionals regarding the added value of AI for supporting clinical decision-making varied widely, and many professionals had concerns about their understanding of, and trust in, this technology. The findings of this review emphasise the need for concerted efforts to optimise the integration of AI in real-world healthcare settings.