2018
DOI: 10.1056/nejmp1714229
Implementing Machine Learning in Health Care — Addressing Ethical Challenges

Cited by 929 publications (671 citation statements)
References 4 publications
“…In addition, a safeguard against “learned helplessness” must be used as a means to curb high reliance on automation and the ultimate abandonment of common sense. Finally, automated systems also might challenge the dynamics of responsibility within the doctor‐patient relationship, as well as the expectation of confidentiality …”
Section: Challenges and Future Directions
confidence: 99%
“…8 As a result, the algorithms may not offer benefit to people whose data are missing from the data set. 9 …”
Section: Missing Data and Patients Not Identified by Algorithms
confidence: 99%
“…Automation is important, but overreliance on automation is not desirable. 8,20 Computer scientists and bioinformaticians, together with practitioners, biostatisticians, and epidemiologists, should outline the “intent behind the design,” 9(p982) including choosing appropriate questions and settings for machine learning use, interpreting findings, and conducting follow-up studies. Such measures would increase the likelihood that the results of the models are meaningful and ethical and that clinical decision support tools based on these algorithms have beneficial effects.…”
Section: Recommendations
confidence: 99%
“…Clinical data sets tend to be small, and models trained on limited observations of a certain type of data (eg, recorded in a silent room, Caucasian speakers, adults) may not even extrapolate to data that seem similar. Furthermore, algorithms are susceptible to learning biases inherent in the data used to train them (eg, incorrectly assigning lower disorder severity to African Americans because fewer of them have the disorder in the training set). Critically, many high‐performing algorithms (eg, deep neural networks, proprietary models) are “black boxes,” since it is currently not understood how these models combine features to output the severity of a disorder.…”
Section: Introduction
confidence: 99%