Objectives: Patients' views on the implementation of artificial intelligence (AI) in radiology remain largely unexplored territory. The aim of this article is to develop and validate a standardized patient questionnaire on the implementation of AI in radiology.
Methods: Six domains derived from a previous qualitative study were used to develop a questionnaire, and cognitive interviews were used as the pretest method. One hundred fifty-five patients scheduled for CT, MRI, and/or conventional radiography filled out the questionnaire. To find underlying latent variables, we used exploratory factor analysis with principal axis factoring and oblique promax rotation. Internal consistency of the factors was measured with Cronbach's alpha and composite reliability.
Results: The exploratory factor analysis revealed five factors on AI in radiology: (1) distrust and accountability (overall, patients were moderately negative on this subject), (2) procedural knowledge (patients generally indicated the need for their active engagement), (3) personal interaction (overall, patients preferred personal interaction), (4) efficiency (overall, patients were ambivalent on this subject), and (5) being informed (overall, scores on these items were not pronounced within this factor). Internal consistency was good for three factors (1, 2, and 3), and acceptable for two (4 and 5).
Conclusions: This study yielded a viable questionnaire to measure patients' acceptance of the implementation of AI in radiology. Additional data collection with confirmatory factor analysis may provide further refinement of the scale.
Key Points
• Although AI systems are increasingly developed, not much is known about patients' views on AI in radiology.
• Since it is important that newly developed questionnaires are adequately tested and validated, we did so for a questionnaire measuring patients' views on AI in radiology, revealing five factors.
• Successful implementation of AI in radiology requires assessment of social factors such as subjective norms towards the technology.
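The first abstract reports internal consistency via Cronbach's alpha. As a brief illustration of how that statistic is computed (this is not the authors' analysis code, and the data below are synthetic), a minimal NumPy sketch:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic check: two perfectly correlated items (the second is twice
# the first) give alpha = 2 * (1 - 5/9) = 8/9, roughly 0.889.
X = np.array([1.0, 2.0, 3.0, 4.0])
scores = np.column_stack([X, 2 * X])
print(cronbach_alpha(scores))
```

The abstract's exploratory factor analysis with promax rotation would typically be run with dedicated statistical software; only the reliability coefficient is sketched here because it reduces to a simple variance ratio.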
Summary: In this paper we provide a model of interviewer-respondent interaction in survey interviews. Our model is primarily focused on the occurrence of problems within this interaction that seem likely to affect data quality. Both conversational principles and cognitive processes, especially where they do not match the requirements of the respondent's task, are assumed to affect the course of interactions. The cognitive processes involved in answering a survey question are usually described by means of four steps: interpretation, retrieval, judgement and formatting. Each of these steps may be responsible for different overt problems, such as requests for clarification or inadequate answers. Such problems are likely to affect the course of the interaction through conversational principles which may cause, for example, suggestive behaviour on the part of the interviewer, which may in turn yield new problematic behaviours. However, the respondent may not be the only one who experiences cognitive problems; the interviewer may also have such problems, for example with respect to explaining question meaning to the respondent. Thus the model proposed here, unlike most other models, which concentrate on the respondent, tries to incorporate cognitive processes and conversational principles with respect to both interviewer and respondent. In particular, the model looks at how cognitive processes and conversational principles affect both the interaction between interview participants and the quality of the eventual answers.
Objective: To investigate the general population's view on the use of artificial intelligence (AI) for the diagnostic interpretation of screening mammograms.
Methods: Dutch women aged 16 to 75 years were surveyed using the Longitudinal Internet Studies for the Social Sciences panel, representative of the Dutch population. Attitude toward AI in mammography screening was measured by means of five items: necessity of a human check; AI as a selector for second reading; AI as a second reader; developer is responsible for error; and radiologist is responsible for error.
Results: Of the 922 participants included, 77.8% agreed with the necessity of a human check, whereas the item AI as a selector for a second reading was answered more heterogeneously, with 41.7% disagreement, 31.5% agreement, and 26.9% responding "neither agree nor disagree." The item AI as a second reader was most often answered with "neither agree nor disagree" (37.1%) and "agree" (37.6%), whereas the last two items, on the developer's and radiologist's responsibilities, were most often answered with "neither agree nor disagree" (44.6% and 39.2%, respectively).
Discussion: Despite recent breakthroughs in the diagnostic performance of AI algorithms for the interpretation of screening mammograms, the general population currently does not support fully independent use of such systems without involving a radiologist. The combination of a radiologist as a first reader and an AI system as a second reader in a breast cancer screening program finds the most support at present. Accountability in case of AI-related diagnostic errors in screening mammography is still an unresolved conundrum.