Objective The objective was to identify barriers and facilitators to the implementation of artificial intelligence (AI) applications in clinical radiology in the Netherlands. Materials and methods Using an embedded multiple case study, an exploratory, qualitative research design was followed. Data collection consisted of 24 semi-structured interviews conducted at seven Dutch hospitals. The analysis of barriers and facilitators was guided by the recently published Non-adoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework for new medical technologies in healthcare organizations. Results Among the most important facilitating factors for implementation were the following: (i) pressure for cost containment in the Dutch healthcare system, (ii) high expectations of AI’s potential added value, (iii) presence of hospital-wide innovation strategies, and (iv) presence of a “local champion.” Among the most prominent hindering factors were the following: (i) inconsistent technical performance of AI applications, (ii) unstructured implementation processes, (iii) uncertain added value for clinical practice of AI applications, and (iv) large variance in acceptance and trust among direct (the radiologists) and indirect (the referring clinicians) adopters. Conclusion In order for AI applications to contribute to the improvement of the quality and efficiency of clinical radiology, implementation processes need to be carried out in a structured manner, thereby providing evidence on the clinical added value of AI applications. Key Points • Successful implementation of AI in radiology requires collaboration between radiologists and referring clinicians. • Implementation of AI in radiology is facilitated by the presence of a local champion. • Evidence on the clinical added value of AI in radiology is needed for successful implementation.
Objectives Radiologists’ perception is likely to influence the adoption of artificial intelligence (AI) into clinical practice. We investigated knowledge of and attitudes towards AI among radiologists and residents in Europe and beyond. Methods Between April and July 2019, a survey on fear of replacement, knowledge, and attitude towards AI was made accessible to radiologists and residents. The survey was distributed through several radiological societies, author networks, and social media. Independent predictors of fear of replacement and of a positive attitude towards AI were assessed using multivariable logistic regression. Results The survey was completed by 1,041 respondents from 54 mostly European countries. Most respondents were male (n = 670, 65%), median age was 38 (24–74) years, n = 142 (35%) were residents, and n = 471 (45%) worked in an academic center. Basic AI-specific knowledge was associated with fear (adjusted OR 1.56, 95% CI 1.10–2.21, p = 0.01), while intermediate AI-specific knowledge (adjusted OR 0.40, 95% CI 0.20–0.80, p = 0.01) and advanced AI-specific knowledge (adjusted OR 0.43, 95% CI 0.21–0.90, p = 0.03) were inversely associated with fear. A positive attitude towards AI was observed in 48% (n = 501) and was associated with having only heard of AI, intermediate (adjusted OR 11.65, 95% CI 4.25–31.92, p < 0.001), or advanced AI-specific knowledge (adjusted OR 17.65, 95% CI 6.16–50.54, p < 0.001). Conclusions Limited AI-specific knowledge levels among radiology residents and radiologists are associated with fear, while intermediate to advanced AI-specific knowledge levels are associated with a positive attitude towards AI. Additional training may therefore improve clinical adoption. Key Points • Forty-eight percent of radiologists and residents have an open and proactive attitude towards artificial intelligence (AI), while 38% fear replacement by AI.
• Intermediate and advanced AI-specific knowledge levels may enhance adoption of AI in clinical practice, while rudimentary knowledge levels appear to be inhibitive. • AI should be incorporated in radiology training curricula to help facilitate its clinical adoption.
This is a condensed summary of an international multisociety statement on ethics of artificial intelligence (AI) in radiology produced by the ACR, European Society of Radiology, RSNA, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists, and American Association of Physicists in Medicine. AI has great potential to increase efficiency and accuracy throughout radiology, but also carries inherent pitfalls and biases. Widespread use of AI-based intelligent and autonomous systems in radiology can increase the risk of systemic errors with high consequence, and highlights complex ethical and societal issues. Currently, there is little experience using AI for patient care in diverse clinical settings. Extensive research is needed to understand how to best deploy AI in clinical practice. This statement highlights our consensus that ethical use of AI in radiology should promote well-being, minimize harm, and ensure that the benefits and harms are distributed among stakeholders in a just manner. We believe AI should respect human rights and freedoms, including dignity and privacy. It should be designed for maximum transparency and dependability. Ultimate responsibility and accountability for AI remains with its human designers and operators for the foreseeable future. The radiology community should start now to develop codes of ethics and practice for AI which promote any use that helps patients and the common good and should block use of radiology data and algorithms for financial gain without those two attributes.
The full version (Appendix E1 [online]) is posted on the web pages of each of these societies. Authors include society representatives, patient advocates, an American professor of philosophy, and attorneys with experience in radiology and privacy in the United States and the European Union. Artificial intelligence (AI), defined as computers that behave in ways previously thought to require human intelligence, has the potential to substantially improve radiology, help patients, and decrease cost (1). Radiologists are experts at acquiring information from medical images. AI can extend this expertise, extracting even more information to make better or entirely new predictions about patients. Going forward, conclusions about images will be made by human radiologists in conjunction with intelligent and autonomous machines. Although the machines will make mistakes, they are likely to make decisions more efficiently and with more consistency than humans, and in some instances will contradict human radiologists and be proven correct. AI will affect image interpretation, report generation, result communication, and billing practice (1,2). AI has the potential to alter professional relationships, patient engagement, knowledge hierarchy, and the labor market. Additionally, AI may exacerbate the concentration and imbalance of resources, with entities that have significant AI resources having more "radiology decision-making" capabilities. Radiologists and radiology departments will themselves become data, categorized and evaluated by AI models. AI will infer patterns in personal, professional, and institutional behavior. The value, ownership, use of, and access to radiology data have taken on new meanings and significance in the era of AI. AI is complex and carries potential pitfalls and inherent biases.
Widespread use of AI-based intelligent and autonomous machines in radiology can increase systemic risks of harm, raise the possibility of errors with high consequences, and amplify complex ethical and societal issues.
The coronavirus disease 2019 (COVID-19) pandemic is a global health care emergency. Although reverse-transcription polymerase chain reaction testing is the reference standard method to identify patients with COVID-19 infection, chest radiography and CT play a vital role in the detection and management of these patients. Prediction models for COVID-19 imaging are rapidly being developed to support medical decision making. However, inadequate availability of a diverse annotated data set has limited the performance and generalizability of existing models. To address this unmet need, the RSNA and Society of Thoracic Radiology collaborated to develop the RSNA International COVID-19 Open Radiology Database (RICORD). This database is the first multi-institutional, multinational, expert-annotated COVID-19 imaging data set. It is made freely available to the machine learning community as a research and educational resource for COVID-19 chest imaging. Pixel-level volumetric segmentation with clinical annotations was performed by thoracic radiology subspecialists for all COVID-19–positive thoracic CT scans. The labeling schema was coordinated with other international consensus panels and COVID-19 data annotation efforts, including the European Society of Medical Imaging Informatics, the American College of Radiology, and the American Association of Physicists in Medicine. Study-level COVID-19 classification labels for chest radiographs were annotated by three radiologists, with majority vote adjudication by board-certified radiologists. RICORD consists of 240 thoracic CT scans and 1000 chest radiographs contributed from four international sites. It is anticipated that RICORD will ideally lead to prediction models that can demonstrate sustained performance across populations and health care systems. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Bai and Thomasian in this issue.
The growing use of social media is transforming the way health care professionals (HCPs) communicate. In this changing environment, it could be useful to outline the usage of social media by radiologists in all its facets and on an international level. The main objective of the RANSOM survey was to investigate how radiologists are using social media and what their attitude towards them is. The second goal was to discern differences in tendencies among American and European radiologists. An international survey was launched on SurveyMonkey (https://www.surveymonkey.com), asking questions about the platforms radiologists prefer, about the advantages, disadvantages, and risks, and about the main incentives and barriers to using social media. A total of 477 radiologists participated in the survey, of whom 277 were from Europe and 127 from North America. The results show that 85 % of all survey participants are using social media, mostly for a mixture of private and professional reasons. Facebook is the most popular platform for general purposes, whereas LinkedIn and Twitter are more popular for professional usage. The most important reason for not using social media is an unwillingness to mix private and professional matters. Eighty-two percent of all participants are aware of the educational opportunities offered by social media. The survey results underline the need to increase radiologists' skills in using social media efficiently and safely. There is also a need to create clear guidelines regarding the online and social media presence of radiologists to maximize the potential benefits of engaging with social media.
Objectives Currently, hurdles to implementation of artificial intelligence (AI) in radiology are a much-debated topic but have not been investigated in the community at large. Also, controversy exists over whether and to what extent AI should be incorporated into radiology residency programs. Methods Between April and July 2019, an international survey took place on AI regarding its impact on the profession and training. The survey was accessible to radiologists and residents and distributed through several radiological societies. Relationships of independent variables with opinions, hurdles, and education were assessed using multivariable logistic regression. Results The survey was completed by 1041 respondents from 54 countries. A majority (n = 855, 82%) expects that AI will cause a change to the radiology field within 10 years. The most frequently expected roles of AI in clinical practice were second reader (n = 829, 78%) and workflow optimization (n = 802, 77%). Ethical and legal issues (n = 630, 62%) and lack of knowledge (n = 584, 57%) were mentioned most often as hurdles to implementation. Expert respondents added lack of labelled images and generalizability issues. A majority (n = 819, 79%) indicated that AI should be incorporated in residency programs, while less support for imaging informatics and AI as a subspecialty was found (n = 241, 23%). Conclusions Broad community demand exists for incorporation of AI into residency programs. Based on the results of the current study, integration of AI education seems advisable for radiology residents, including issues related to data management, ethics, and legislation. Key Points • There is broad demand from the radiological community to incorporate AI into residency programs, but there is less support to recognize imaging informatics as a radiological subspecialty.
• Ethical and legal issues and lack of knowledge are recognized as major bottlenecks for AI implementation by the radiological community, while the shortage in labeled data and IT-infrastructure issues are less often recognized as hurdles. • Integrating AI education in radiology curricula including technical aspects of data management, risk of bias, and ethical and legal issues may aid successful integration of AI into diagnostic radiology.