2019
DOI: 10.1038/s42256-019-0048-x

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

Abstract: Black box machine learning models are currently being used for high stakes decision-making throughout society, causing problems throughout healthcare, criminal justice, and in other domains. People have hoped that creating methods for explaining these black box models will alleviate some of these problems, but trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. Th…

Cited by 4,767 publications (3,515 citation statements)
References 39 publications
“…Furthermore, as many articles have done, it is useful to attempt to link changes in acoustic features to psychiatric behaviors or symptoms (eg, low f0 variability with flat affect), and use these links for testing hypotheses. There is an ongoing debate around whether complex, difficult to interpret models that perform well should be sacrificed for lower performing but simpler to interpret models. From our point of view, in current medicine, we would not want to discard complex diagnostic tools (eg, biopsy) for simpler but less effective ones (eg, lump palpation).…”
Section: Discussion (mentioning)
confidence: 99%
“…It achieves better prediction, but may not contribute to understanding of the underlying phenomenon. Recently, interpretable machine learning models (explainable AI) have attracted broad interest [23,24]. In future work, it would be fruitful to extend our method by incorporating some of these ideas for better interpretability.…”
Section: Results (mentioning)
confidence: 99%
“…In recent years, there have been substantial efforts to develop more human-interpretable machine learning tools in response to the ethical and safety concerns of using ‘black-box’ algorithms in medicine [15] or in high-stakes decisions [16]. A Perspective in Nature Machine Intelligence [16], the Explainable Machine Learning Challenge in 2018 [47], and other initiatives serve as reminders of the ethical advantages of using interpretable white-box models over black-box ones. Novel software packages and methods (i.e., [48,49]) bring elements of ensemble learning and RF into the linear model space to combine the high accuracy of ensemble learners with the interpretability of generalized linear models.…”
Section: Moving Towards Interpretable White-box Algorithms (mentioning)
confidence: 99%
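To make the trade-off discussed in these excerpts concrete, the following sketch (a hypothetical illustration, not code from the cited papers or from the Perspective) trains a black-box random forest and a sparse, interpretable L1-penalised logistic regression on the same public dataset, compares their accuracy, and then reads the linear model's non-zero coefficients directly. The dataset, scikit-learn APIs, and hyperparameters are assumptions chosen only for the demonstration.

```python
# Minimal sketch (assumed setup, not from the cited papers): compare a
# black-box ensemble with a sparse, interpretable linear model on the
# same task, then read the interpretable model's weights directly.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Black-box ensemble: typically strong accuracy, opaque decision logic.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Interpretable alternative: L1-penalised logistic regression gives a
# sparse, directly readable set of weighted features.
lr = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
).fit(X_tr, y_tr)

print(f"random forest accuracy  : {rf.score(X_te, y_te):.3f}")
print(f"sparse logistic accuracy: {lr.score(X_te, y_te):.3f}")

# The interpretable model's entire decision rule is its non-zero weights.
coefs = lr.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(X.columns, coefs), key=lambda t: -abs(t[1])):
    if w != 0.0:
        print(f"{name:30s} {w:+.3f}")
```

Whether the accuracy gap between models like these is large enough to justify the black box is exactly the empirical question the citing papers above debate.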
“…Unfortunately, these techniques do not scale easily, either computationally or memory-wise, for identifying molecular interactions, seriously limiting their translational utility in medicine and increasing the complexity of their implementation in distributed computing. In addition, there is an increasing consensus among clinicians and machine learning experts that, for ethical and safe translation, machine-learned algorithms used in high-stakes clinical decisions should be interpretable and explainable [15][16][17][18].…”
Section: Introduction (mentioning)
confidence: 99%