Purpose
The expanded use of clinical tools that incorporate artificial intelligence (AI) methods has generated calls for specific competencies for effective and ethical use. This qualitative study used expert interviews to define AI-related clinical competencies for health care professionals.

Method
In 2021, a multidisciplinary team interviewed 15 experts in the use of AI-based tools in health care settings about the clinical competencies health care professionals need to work effectively with such tools. Transcripts of the semistructured interviews were coded and thematically analyzed. Draft competency statements were developed and provided to the experts for feedback. The competencies were finalized using a consensus process across the research team.

Results
Six competency domain statements and 25 subcompetencies were formulated from the thematic analysis. The competency domain statements are: (1) basic knowledge of AI: explain what AI is and describe its health care applications; (2) social and ethical implications of AI: explain how social, economic, and political systems influence AI-based tools and how these relationships impact justice, equity, and ethics; (3) AI-enhanced clinical encounters: carry out AI-enhanced clinical encounters that integrate diverse sources of information in creating patient-centered care plans; (4) evidence-based evaluation of AI-based tools: evaluate the quality, accuracy, safety, contextual appropriateness, and biases of AI-based tools and their underlying data sets in providing care to patients and populations; (5) workflow analysis for AI-based tools: analyze and adapt to changes in teams, roles, responsibilities, and workflows resulting from implementation of AI-based tools; and (6) practice-based learning and improvement regarding AI-based tools: participate in continuing professional development and practice-based improvement activities related to use of AI tools in health care.
Conclusions The 6 clinical competencies identified can be used to guide future teaching and learning programs to maximize the potential benefits of AI-based tools and diminish potential harms.
Background
The use of artificial intelligence (AI)–based tools in the care of individual patients and patient populations is rapidly expanding.

Objective
The aim of this paper is to systematically identify research on provider competencies needed for the use of AI in clinical settings.

Methods
A scoping review was conducted to identify articles published between January 1, 2009, and May 1, 2020, in the MEDLINE, CINAHL, and Cochrane Library databases, using search queries for terms related to health care professionals (eg, medical, nursing, and pharmacy) and their professional development in all phases of clinical education, AI-based tools in all settings of clinical practice, and the professional education domains of competencies and performance. Searches were limited to English-language studies of humans with abstracts, set in the United States.

Results
The searches identified 3476 records, of which 4 met the inclusion criteria. These studies described the use of AI in clinical practice and measured at least one aspect of clinician competence. While many of the screened studies measured the performance of the AI-based tool itself, only these 4 measured clinician performance in terms of the knowledge, skills, or attitudes needed to understand and effectively use the new tools being tested. The 4 articles primarily focused on the ability of AI to enhance patient care and clinical decision-making by improving information flow and display, specifically for physicians.

Conclusions
While many research studies were identified that investigate the potential effectiveness of using AI technologies in health care, very few address the specific competencies clinicians need to use them effectively. This highlights a critical gap.
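The review's search strategy combines three concept blocks (professionals, AI-based tools, and competencies) with the stated limits. A hypothetical MEDLINE-style query illustrating that structure might look like the following; the specific terms and field tags are assumptions for illustration, not the review's actual search string:

```
("artificial intelligence" OR "machine learning" OR "clinical decision support")
AND ("health personnel" OR physicians OR nursing OR pharmacy)
AND (competenc* OR "professional development" OR curriculum)
Filters: English language; Humans; Abstract available;
         publication date 2009/01/01 to 2020/05/01
```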
IMPORTANCE
Effective methods for engaging clinicians in continuing education for learning-based practice improvement remain unknown.

OBJECTIVE
To determine whether a smartphone-based app using spaced education with retrieval practice is an effective method to increase evidence-based practice.

DESIGN, SETTING, AND PARTICIPANTS
A prospective, unblinded, single-center, crossover randomized clinical trial was conducted at a single academic medical center from January 6 to April 24, 2020. Vanderbilt University Medical Center clinicians prescribing intravenous fluids were invited to participate in this study.

INTERVENTIONS
All clinicians received two 4-week education modules: 1 on prescribing intravenous fluids and 1 on prescribing opioid and nonopioid medications (counterbalancing measure), over a 12-week period. The order of delivery was randomized 1:1 such that 1 group received the fluid management module first, followed by the pain management module after a 4-week break, and the other group received the pain management module first, followed by the fluid management module after a 4-week break.

MAIN OUTCOMES AND MEASURES
The primary outcome was evidence-based clinician prescribing behavior concerning intravenous fluids in the inpatient setting and pain medication prescribing on discharge from the hospital.

RESULTS
A total of 354 participants were enrolled and randomized, with 177 in group 1 (fluid then pain management education) and 177 in group 2 (pain management then fluid education). During the overall study period, 16 868 questions were sent to 349 learners, with 11 783 (70.0%) being opened; 10 885 (92.4%) of those opened were answered, and 7175 (65.9%) of those answered were answered correctly. The differences between groups changed significantly over time, indicated by the significant interaction between educational intervention and time (P = .002).
Briefly, at baseline, evidence-concordant IV fluid orders were placed 7.2% less frequently in group 1 than in group 2 (95% CI, −19.2% to 4.9%). After training, this reversed: ordering was 4% higher (95% CI, −8.2% to 16.0%) in group 1 than in group 2, a more than doubling of the odds of evidence-concordant ordering (OR, 2.56; 95% CI, 0.80-8.21). Postintervention, these gains had reversed, with less frequent ordering in group 1 than in group 2 (−9.5%; 95% CI, −21.6% to 2.7%). There was no measurable change in opioid prescribing behaviors at any time point.
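The odds ratio reported above compares the odds of evidence-concordant ordering between groups. As a minimal sketch of how an odds ratio is derived from two proportions (the trial's estimate came from a model with a group-by-time interaction; the proportions below are hypothetical, not the trial's data):

```python
def odds(p: float) -> float:
    """Convert a proportion (0 < p < 1) to odds."""
    return p / (1.0 - p)

def odds_ratio(p1: float, p2: float) -> float:
    """Odds ratio comparing the odds at proportion p1 against p2."""
    return odds(p1) / odds(p2)

# Hypothetical proportions of evidence-concordant orders:
# 60% in one group vs. 50% in the other.
print(round(odds_ratio(0.60, 0.50), 2))  # prints 1.5
```

Note that an OR above 1 can still be statistically uncertain, as here: the reported 95% CI (0.80-8.21) crosses 1.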
Artificial intelligence-based algorithms are being widely implemented in health care, even as evidence is emerging of bias in their design, problems with implementation, and potential harm to patients. To achieve the promise of AI-based tools for improving health, health care organizations will need to be AI-capable, with internal and external systems functioning in tandem to ensure the safe, ethical, and effective use of AI-based tools. Ideas are starting to emerge about the organizational routines, competencies, resources, and infrastructures that will be required for safe and effective deployment of AI in health care, but there has been little empirical research. Infrastructures that provide legal and regulatory guidance for managers, clinician competencies for the safe and effective use of AI-based tools, and learner-centric resources such as clear AI documentation and local health ecosystem impact reviews can help drive continuous improvement.