2019
DOI: 10.1001/amajethics.2019.138

How Should Clinicians Communicate With Patients About the Roles of Artificially Intelligent Team Members?

Abstract: This commentary responds to a hypothetical case involving an assistive artificial intelligence (AI) surgical device and focuses on potential harms emerging from interactions between humans and AI systems. Informed consent and responsibility (specifically, how responsibility should be distributed among professionals, technology companies, and other stakeholders) for uses of AI in health care are discussed. Case: Mr K is a 54-year-old man referred to Dr L's outpatient spine neurosurgery clinic because he has a 6-we…

Cited by 55 publications (24 citation statements) | References 9 publications

“…AI agents should not follow a coercive approach to force patients to make health-related decisions under pressure. Regulations should illuminate patients’ roles in relation to AI applications so that they are aware of their position to refuse AI-based treatments where possible [ 97 ]. An important aspect that needs to be built into AI systems in health care is the transparency of AI algorithms so that the AI system does not remain a black box to the users.…”
Section: Discussion (mentioning)
confidence: 99%
“…We agree because the items we identified to be included in the information process require specialized knowledge. Specifically, physicians need to know how AI/ML applications are constructed, which data were used to train them, and what their limitations are ( 10 ).…”
Section: Discussion (mentioning)
confidence: 99%
“…The second implication for the general practitioner is that the patient's fears or overconfidence in AI-aided diagnosis must be addressed. This can be achieved by describing the risks and potential benefits of the AI system, e.g., by providing studies that compare diagnostic accuracy of medical AI compared with human eye doctors ( 10 ). If patients accept and are aware of the risks and limitations of AI-aided diagnosis, they will save a long wait for an ophthalmologist appointment.…”
Section: Discussion (mentioning)
confidence: 99%
“…Would this consent meet the ethical or legal standards we expect in healthcare? (14,15). The authors explain that they are concerned about consent because the nature of opaque systems means we necessarily cannot know how the systems are learning or drawing connections.…”
Section: The First Option: Opaque Systems In Healthcare (mentioning)
confidence: 99%