2023 | DOI: 10.3389/fphar.2023.1297353

Liability for harm caused by AI in healthcare: an overview of the core legal concepts

Dane Bottomley,
Donrich Thaldar

Abstract: The integration of artificial intelligence (AI) into healthcare in Africa presents transformative opportunities but also raises profound legal challenges, especially concerning liability. As AI becomes more autonomous, determining who or what is responsible when things go wrong becomes ambiguous. This article aims to review the legal concepts relevant to the issue of liability for harm caused by AI in healthcare. While some suggest attributing legal personhood to AI as a potential solution, the feasibility of …

Cited by 5 publications (5 citation statements)
References 29 publications (66 reference statements)
“…On the other hand, autonomous AI systems (or strong AI tending to progressively become autonomous) might “independently identify normal X-rays and generate reports, bypassing radiologists” [ 19 ]. Thus, when AI is autonomous, tension will arise about the responsibility, especially when the physicians struggle to understand how the system works or when they are forced to seek multiple ways of validating their decision to follow or reject AI recommendations [ 32 ].…”
Section: Discussion
“…Finally, having different responsibilities at different stages of the sensor/AI system’s lifecycle seems to be a functioning approach used in the EU [ 32 ], particularly when there is not one piece of technology but a heterogeneous group with varying liability risks. It was mentioned in the article that there would be new regulation created specifically in the context of sensor/AI systems in healthcare, which could be a good way to provide a better framework, especially when it has been proven that liability laws can “encourage adoption of technologies that reduce harm (to users, workers, or the public)” [ 60 ].…”
Section: Discussion
“…However, this advantage could be restricted as many modern systems lack transparent reasoning. Moreover, the main disparity between human and AI decision-making lies in the morality of their actions ( 28 ). Indeed, human decision-making is influenced by moral considerations, a dimension completely absent in computer systems ( 2 ).…”
Section: AI in Medico-legal Practice: Ethics and Liability Implications
“…The challenge arises when AI computational power far exceeds human intellect. In such instances, if AI is not embedded into care standards, care providers must take the risk by choosing between adherence to recognized guidelines or relying on AI outputs ( 28 ). The lack of a clear definition of the responsibility of both AI and the physician who uses it further complicates the ability to assess fault-based liability, due to the ambiguity surrounding carelessness ( 28 ).…”
Section: AI in Medico-legal Practice: Ethics and Liability Implications