2019
DOI: 10.1007/s13347-019-00359-6
A Tale of Two Deficits: Causality and Care in Medical AI

Abstract: In this paper, two central questions will be addressed: ought we to implement medical AI technology in the medical domain? If yes, how ought we to implement this technology? I will critically engage with three options that exist with respect to these central questions: the Neo-Luddite option, the Assistive option, and the Substitutive option. I will first address key objections on behalf of the Neo-Luddite option: the Objection from Bias, the Objection from Artificial Autonomy, the Objection from Status Quo, a…

Cited by 6 publications (2 citation statements)
References 45 publications (38 reference statements)
“…According to the figure, we have two groups of people for whom the explanations should be tailored: data scientists and healthcare professionals. Since theories from psychology and cognitive science provide evidence that humans are able to make causal inferences [58], generating explanations from such models can help enhance the interpretability of the explanations. Moreover, Miller [59] suggests that, to achieve interpretability, explanations should also explain in terms of contrasting events (why event X happened instead of Y).…”
Section: Towards Causal Interpretability for Accountability (citation type: mentioning)
confidence: 99%
“…Chen (2019) provides a very interesting discussion of AI and causality, not from the perspective of the RCT issue that I raise here, but from a much broader and still relevant point of view. He advances the key question of whether AI technology should be adopted in the medical field.…”
Section: The Causality Conundrum: Do We Still Need RCTs? (citation type: mentioning)
confidence: 99%