2018
DOI: 10.1007/s11017-018-9442-3

Against the iDoctor: why artificial intelligence should not replace physician judgment

Abstract: Experts in medical informatics have argued for the incorporation of ever more machine-learning algorithms into medical care. As artificial intelligence (AI) research advances, such technologies raise the possibility of an "iDoctor," a machine theoretically capable of replacing the judgment of primary care physicians. In this article, I draw on Martin Heidegger's critique of technology to show how an algorithmic approach to medicine distorts the physician-patient relationship. Among other problems, AI cannot ad…


Cited by 38 publications (34 citation statements)
References 12 publications
“…While on one end of the spectrum the most skeptical are dubious about the actual capabilities of AI, on the opposite end some (including the late Stephen Hawking) are worried AI may eventually surpass human intelligence and become uncontrollable (Hawking et al, 2014). In the medical field, there are concerns that machine learning may lead to physician deskilling (Cabitza et al, 2017) and cause a distortion of the doctor-patient relationship (Karches, 2018). However, such concerns are often not specific to AI or machine learning, but rather on the way they are employed and therefore other authors believe that an appropriate, informed use of AI may be beneficial and may greatly improve patient care (McDonald et al, 2017;EsteChanva et al, 2019;Liyanage et al, 2019).…”
Section: Introduction (mentioning)
confidence: 99%
“…Laser Destroys Cancer Cells Circulating in the Blood. The first study of a new treatment in humans demonstrates a non-invasive, harmless cancer killer; Smart Knife Detects Cancer in Seconds. By excluding mention of human agency, these statements imply autonomous machine function, potentially denigrating human capacities and skills (Karches 2018), and hence the actors in a clinical encounter are the patient, their various influences, the physician, and an instantiated "machine entity" in a therapeutic triad (Swinglehurst et al 2014).…”
Section: Ontological Differences (mentioning)
confidence: 99%
“…The key issues underpinning concerns about use of AI in medicine are the lack of empathy and intuition. One of the recent articles warning of the limitations of AI [14] refers to “pre-conceptual background knowledge” in relation to intuition or unintentional affectivity: solutions that lack empathy. “Preconceptual understanding” is the key to developing the empathy that provides clinicians with a clear focus on the patient and that underpins creativity, innovation, and patient safety within radiotherapy.…”
Section: Being Human or Being Hen (mentioning)
confidence: 99%
“…More likely, however, is that, lacking a human preconceptual understanding of relevant knowledge, it would produce connections that are uninteresting at best and unintelligible at worst” [14].…
(mentioning)
confidence: 99%