2019
DOI: 10.1001/amajethics.2019.131
Should Watson Be Consulted for a Second Opinion?

Abstract: This article discusses ethical responsibility and legal liability issues regarding use of IBM Watson™ for clinical decision making. In a case, a patient presents with symptoms of leukemia. Benefits and limitations of using Watson or other intelligent clinical decision-making tools are considered, along with precautions that should be taken before consulting artificially intelligent systems. Guidance for health care professionals and organizations using artificially intelligent tools to diagnose and to develo…

Cited by 25 publications (11 citation statements)
References 6 publications
“…For instance, it will create a risky situation for both clinicians and patients when it is still unclear who becomes responsible if AI clinical applications offer wrong health care recommendations [54]. There is also no precise regulation regarding who is held liable when a physician follows the medical recommendations provided by AI and when a physician decides to override the recommendations [55].…”
Section: Introduction (mentioning)
confidence: 99%
“…The proposal we are concentrating on is to use diagnostic systems as automated second opinions (e.g., Luxton [17]). The core thesis of this proposal is that AI-DSS could replace human doctors in providing additional diagnostic services, akin to a second opinion.…”
Section: Using AI To Provide Second Opinions (mentioning)
confidence: 99%
“…Some patients may feel able to control and manage their disease, with passive surveillance and/or less contact with the clinician, whereas others may feel overwhelmed by additional responsibilities [55]. AI may also create unrealistic expectations in some patients regarding clinical outcomes, which could have a negative impact on their care and service experience [56]. In addition, some AI-based decisions could be perceived as a restriction on the patient's right to make a free and informed decision [1,53].…”
Section: Human and Cognitive Dimensions (mentioning)
confidence: 99%