2023
DOI: 10.3174/ajnr.a7963
Ethical Considerations and Fairness in the Use of Artificial Intelligence for Neuroradiology

C.G. Filippi,
J.M. Stein,
Z. Wang
et al.

Abstract: In this review, concepts of algorithmic bias and fairness are defined qualitatively and mathematically. Illustrative examples are given of what can go wrong when unintended bias or unfairness in algorithmic development occurs. The importance of explainability, accountability, and transparency with respect to artificial intelligence algorithm development and clinical deployment is discussed. These are grounded in the concept of "primum non nocere" (first, do no harm). Steps to mitigate unfairness and bias in tas…
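Since the abstract says fairness is defined mathematically, a minimal sketch of two commonly used group-fairness criteria, demographic parity and equalized odds, may help make that concrete. This is an illustrative assumption about which definitions the review uses, not the paper's own formulation; the function names and the binary-group setup are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    A value near 0 means the model flags patients at similar rates
    regardless of group membership (demographic parity).
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate across groups.

    Equalized odds asks that error behavior, not just overall
    prediction rates, match between groups.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for label in (0, 1):  # FPR when label == 0, TPR when label == 1
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)

# Toy example: binary predictions for two patient subgroups.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.25
print(equalized_odds_gap(y_true, y_pred, group))     # ~0.67
```

Note that the two criteria can disagree: a model may assign positive predictions at equal rates across groups while still making more errors in one of them, which is why reviews of algorithmic fairness typically report more than one metric.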

Cited by 3 publications (1 citation statement)
References 42 publications
“…In terms of the other failure modes identified, patient setup position and non-standard anatomy, it is likely that such failures are caused by the absence of sufficient numbers of these “unusual” case types in the model training data set of the AI-based auto-segmentation software. This sort of dataset bias in AI is a well-known issue 29–34 and should be expected, given the relative frequency of such cases in the clinic. In addition, given the observed wide anatomical variation in patients with non-standard anatomy, it may not be possible to include sufficient numbers of this patient type in a model training data set for the training to be effective, because of the large patient datasets typically required to train DL models 35 …”
Section: Discussion
confidence: 93%
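The citing study attributes these failures to underrepresentation of unusual case types in the training set. A simple audit that groups test-set performance by case type can surface exactly this kind of dataset bias; the sketch below is hypothetical, assuming per-case anatomy tags and Dice scores that are not taken from the cited work.

```python
from collections import defaultdict

# Hypothetical audit of an auto-segmentation test set: group Dice
# scores by anatomy type to expose the representation-driven failure
# mode the quoted study describes. Tags and scores are illustrative.
cases = [
    {"anatomy": "standard",     "dice": 0.94},
    {"anatomy": "standard",     "dice": 0.92},
    {"anatomy": "standard",     "dice": 0.95},
    {"anatomy": "non-standard", "dice": 0.71},
    {"anatomy": "non-standard", "dice": 0.63},
]

by_type = defaultdict(list)
for case in cases:
    by_type[case["anatomy"]].append(case["dice"])

for anatomy, scores in by_type.items():
    share = len(scores) / len(cases)
    mean_dice = sum(scores) / len(scores)
    print(f"{anatomy:>12}: n={len(scores)} ({share:.0%} of set), "
          f"mean Dice={mean_dice:.2f}")
```

Reporting performance stratified this way, rather than as a single pooled average, is one concrete step toward the bias-mitigation practices the reviewed article recommends.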