2022
DOI: 10.1007/s43681-022-00141-z

Explainable machine learning practices: opening another black box for reliable medical AI

Abstract: In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be interpretable at the algorithm…

Cited by 28 publications (18 citation statements)
References 31 publications
“…This article discusses the danger of the hidden assumptions present in current applications of machine learning (ML in the following) in olfaction. Our analysis extends previous critiques of ML models in biological systems that cautioned against methodological pitfalls, including incomplete data and small datasets or false positives, opacity in algorithm design, and a lack of empirical and contextual grounding (Johnson, 2008; Coveney et al., 2016; Ratti and López-Rubio, 2018; London, 2019; Ratti and Graves, 2022). The scientifically important issue raised with this analysis is that "[these pitfalls do] not explain how we arrive at the wrong model, just how we accept the wrong model" (Johnson, 2008, p. 25, emphasis added).…”
supporting
confidence: 71%
“…3. And finally, related to the point above, trust in AI is not like trust in pharmaceuticals, as some suggest (see [50, 68]), because we do not trust medical artifacts such as pharmaceuticals in their capacity as conveyors of information but rather as capable of performing a chemical intervention in our bodies.…”
Section: Discussion
mentioning
confidence: 99%
“…Another way to see that they are indeed distinct concepts is to simply think of the ways in which they often fail to pair, or map onto one another, in human behavior: we often trust things that are not trustworthy and fail to trust things that are (Ferrario and Loi [27]). For those interested in a technical approach to trust, issues related to transparency and explainability are prominent [22, 27, 68] (Ribeiro et al., 2016). Often, proponents of this approach draw from insights concerning trust in other technological domains such as computer simulations (Durán and Formanek [21], [22]), IT systems in general [15], or autonomous technologies such as cars [18, 38].…”
Section: Trust In AI
mentioning
confidence: 99%
“…Despite occasional optimism about rendering AI decision-making transparent (e.g., Mishra, 2021), most scholars remain concerned about the effects of biased AI used for medical purposes. Among these are concerns that biased AI will reduce persons to mere data (Sparrow and Hatherley, 2019), that AI might impermissibly (and invisibly) incorporate economic data in its rationing recommendations (Sparrow and Hatherley, 2020; Braun et al., 2021), and that AI will rely upon other value-laden considerations (Ratti and Graves, 2022). Again, this is merely a sampling of the technological risks associated with AI.…”
Section: Risks: Technological and Institutional Risks To Privacy And ...
mentioning
confidence: 99%