2019
DOI: 10.1001/jama.2019.15064

Potential Liability for Physicians Using Artificial Intelligence

Abstract: Artificial intelligence (AI) is quickly making inroads into medical practice, especially in forms that rely on machine learning, with a mix of hope and hype. 1 Multiple AI-based products have now been approved or cleared by the US Food and Drug Administration (FDA), and health systems and hospitals are increasingly deploying AI-based systems. 2 For example, medical AI can support clinical decisions, such as recommending drugs or dosages or interpreting radiological images. 2 One key difference from most tradit…

Cited by 291 publications
(202 citation statements)
References 5 publications
“…including China [25, 45-47], and also concerns the members of French regulatory agencies, who clearly want to be able to evaluate AI software before going forward. In the US, from the physician's perspective, this issue could even reduce the level of medical innovation if 'the "safest" way to use medical AI from a liability perspective is as a confirmatory tool to support existing decision-making processes, rather than as a source of ways to improve care' [48]. For individuals without a conflict of interest, the main concern is protecting the population, for example by creating a victim compensation fund if necessary.…”

Section: A Strong Need to Define the Responsibilities of Each Stakeholder
confidence: 99%
“…We note in particular a guide to reading the literature [10], an accompanying editorial [11], and a viewpoint review [12] of the National Academy of Medicine's comprehensive exploration of AI in healthcare [13]. Possible biases in the design and development of AI systems in conjunction with EHRs have also been explored [14], as has their remediation [15] and the potential legal liability risk for a provider using AI [16]. Considering the influential US regulatory framework for Software as a Medical Device, how should the lifecycle of an AI system be viewed, especially if it is adaptive and, at least in theory, self-improving [17]?…”

Section: Results
confidence: 99%
“…Currently, physicians are protected as long as they follow the "standard of care," but as ML becomes more accurate it may itself become the "standard of care," displacing previous practices. According to Price et al [108], the safest way to use ML is only as a confirmatory tool to support existing decision-making processes, and to check with individual malpractice insurers. Physicians are likely to influence how ML is used in practice and when it should be applied in place of human decision-making.…”

Section: Regulation and Liability in Machine Learning
confidence: 99%