2021
DOI: 10.1007/s00330-021-07879-w

Radiologists in the loop: the roles of radiologists in the development of AI applications

Abstract: Objectives: To examine the various roles of radiologists in different steps of developing artificial intelligence (AI) applications. Materials and methods: Through a case study of eight companies active in developing AI applications for radiology in different regions (Europe, Asia, and North America), we conducted 17 semi-structured interviews and collected data from documents. Based on systematic thematic analysis, we identified various roles of radiolog…

Cited by 16 publications (9 citation statements)
References 8 publications
“…Oakden-Rayner's critique contains important epistemological questions that deserve consideration: questions about how comparisons can be made (especially between algorithms and human experts) and how data is labelled (who labels the data, who inspects the data, and whether experts with relevant clinical experience are considered). Labels and codes or criteria for comparing performances come to matter greatly when it comes to validation because they are based on the so-called 'ground truth' of features that the algorithm has learned in the training data: the labels, annotations, or codes in this instance constitute the ground truth or ground for comparison (e.g., Gulshan et al., 2016; Esteva et al., 2017; Oakden-Rayner, 2018; Cabitza et al., 2020; Scheek et al., 2021).…”
Section: Social Science Literature (mentioning)
confidence: 99%
“…In this article, we consider the central question of trust in Artificial Intelligence (AI) technologies for medical diagnosis. As AI becomes increasingly integrated into existing workflows and implemented to support diagnosis and treatment, clinical experts will find it difficult to understand how AI algorithms have been validated: this is where the problem of trust arises (Scheek et al., 2021). For many clinical and technical experts (such as computer and data scientists), trust is a matter of explainability and transparency of the algorithm, or the justification of the outputs of an algorithmic model (Tonekaboni et al., 2019; Barda, 2019; Cutillo et al., 2020).…”
Section: Introduction (mentioning)
confidence: 99%
“…Querying data sets: 'That's where we get the ground truth label from.' AI algorithms rely on data sets for training and testing the algorithm's capacity to learn. This means that the querying of data sets and the process of checking the quality of data sets by clinical experts are the crucial first steps of any AI development (Laï, Brian, and Mamzer 2020; Oakden-Rayner 2017; Scheek, Rezazade Mehrizi, and Ranschaert 2021; Sendak et al 2020). In our study, the querying of clinical data sets was a major recurring theme throughout our conversations.…”
Section: Three Processes of Development: Querying Data Sets, Building … (mentioning)
confidence: 99%
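The data-set checking this excerpt describes can be made concrete with a short sketch. The following Python example is ours, not from any of the cited studies: it screens an annotation file for missing labels and class imbalance, two of the quality issues clinical experts typically look for before training; the file name and column names are hypothetical.

```python
# A minimal sketch of a pre-training data-set audit. The CSV layout
# (columns: image_id, label) and the file name are assumptions made
# for illustration only.
import csv
from collections import Counter

def audit_labels(csv_path: str) -> None:
    """Report missing labels and class balance for an annotation file."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        print("no annotations found")
        return

    missing = [r["image_id"] for r in rows if not r["label"].strip()]
    counts = Counter(r["label"] for r in rows if r["label"].strip())

    print(f"{len(rows)} images, {len(missing)} without a label")
    for label, n in counts.most_common():
        print(f"  {label}: {n} ({n / len(rows):.1%})")

audit_labels("annotations.csv")  # hypothetical file
```

A report like this is typically the starting point for the expert review the excerpt mentions: skewed class counts or missing labels prompt a clinician to inspect the underlying images before any model is trained.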
“…Although AI is talked about as an already complete tool, ready for use in some domains, the development process of AI (like other technologies) plays a crucial role in laying the ground for the acceptance of the AI application (Elish 2018; Elish and Watkins 2020). In fact, the many claims regarding the positive potential of AI are counterbalanced by worries about its failings and implications, such as issues relevant to trust or mistrust (Asan, Bayrak, and Choudhury 2020; Jacobs et al 2021; Lee and Rich 2021), accountability or responsibility (Elish 2018; Lysaght et al 2019; Sendak et al 2020; Sullivan and Schweikart 2019), bias (Challen et al 2019; Cirillo et al 2020; Gianfrancesco et al 2018; Obermeyer et al 2019; Tupasela and Di Nucci 2020), healthcare data set quality (Oakden-Rayner 2017; Laï, Brian, and Mamzer 2020; Scheek, Rezazade Mehrizi, and Ranschaert 2021), deskilling (Cabitza, Rasoini, and Gensini 2017; Floridi et al 2018; Laï, Brian, and Mamzer 2020), job displacement (Recht and Bryan 2017; Strohm 2019), and data privacy and security (Ipsos MORI 2017; Redmore 2019). Many of these issues involve a need for transparency, or the lack thereof (Tonekaboni et al 2019; Shortliffe and Sepúlveda 2018; Grote and Berens 2019; Harwich and Laycock 2018; Montani and Striani 2019).…”
Section: Introduction (mentioning)
confidence: 99%
“…Providing a single image with segmentation labeling can easily require 2–15 min in simple cases, and longer for more complex cases, per image and per person (8–11). To avoid human bias, each image is typically labeled by several human labelers, with three labelers being a typical number (12, 13). For radiological use cases, the labeler must be a trained radiologist, which makes this process costly and temporarily prevents the radiologist from working with patients.…”
Section: Introduction to Labeling (mentioning)
confidence: 99%
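The three-labeler convention in this excerpt can be illustrated with a short majority-vote sketch. The following Python example is ours, not from the cited references: the image IDs and label names are invented, and the two-of-three agreement threshold is one common choice rather than a prescribed standard.

```python
# A minimal sketch of majority-vote consensus across three labelers.
# All image IDs and labels below are hypothetical.
from collections import Counter

votes = {
    "img_001": ["nodule", "nodule", "normal"],
    "img_002": ["normal", "normal", "normal"],
    "img_003": ["nodule", "effusion", "normal"],
}

for image_id, labels in votes.items():
    label, count = Counter(labels).most_common(1)[0]
    if count >= 2:  # at least two of three labelers agree
        print(f"{image_id}: consensus '{label}' ({count}/3)")
    else:
        print(f"{image_id}: no consensus, flag for adjudication")
```

Cases without a two-of-three majority would typically be sent to an additional expert for adjudication, which adds to the per-image cost the excerpt describes.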