2021
DOI: 10.36227/techrxiv.14777772
Preprint

Explaining the Black-box Smoothly - A Counterfactual Approach

Abstract: We propose a BlackBox Counterfactual Explainer that is explicitly developed for medical imaging applications. Classical approaches (e.g., saliency maps) assessing feature importance do not explain how and why variations in a particular anatomical region are relevant to the outcome, which is crucial for transparent decision making in healthcare applications. Our framework explains the outcome by gradually exaggerating…

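A minimal sketch of the progressive-exaggeration idea described in the abstract, assuming a black-box classifier and a conditional generator G(x, c) trained so that the classifier's output on G(x, c) tracks the condition c. Both models below are untrained stand-ins, and every name is illustrative rather than the preprint's actual API.

```python
# Sketch of counterfactual explanation by progressively exaggerating the
# classifier's predicted posterior. Assumptions (not from the preprint):
# a differentiable classifier mapping an image to a probability, and a
# conditional generator G(x, c) such that classifier(G(x, c)) is close to c.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Stand-in black-box classifier: image -> P(outcome)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

class TinyConditionalGenerator(nn.Module):
    """Stand-in conditional generator G(x, c): perturbs x toward condition c."""
    def __init__(self):
        super().__init__()
        self.delta = nn.Linear(1, 64 * 64)  # condition-dependent perturbation

    def forward(self, x, c):
        shift = self.delta(c.view(-1, 1)).view_as(x)
        return torch.clamp(x + 0.1 * shift, 0.0, 1.0)

def progressive_exaggeration(x, generator, classifier, steps=5):
    """Sweep the desired classifier output from 0 to 1 and return the
    generated image series together with the classifier's actual responses."""
    series = []
    for c in torch.linspace(0.0, 1.0, steps):
        x_c = generator(x, c.expand(x.size(0)))
        p = classifier(x_c)
        series.append((c.item(), x_c.detach(), p.detach()))
    return series

if __name__ == "__main__":
    x = torch.rand(1, 1, 64, 64)  # placeholder "medical image"
    series = progressive_exaggeration(x, TinyConditionalGenerator(), TinyClassifier())
    for c, _, p in series:
        print(f"target condition c={c:.2f} -> classifier output {p.item():.3f}")
```

Inspecting what changes in the image as c sweeps from one extreme to the other is what lets the explanation show how and why a region matters, rather than only where the model attends.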

Cited by 6 publications (7 citation statements) · References 34 publications
“…They could not lead us to use real images to even partially reach the clear observation we could make otherwise from synthetic ones. Some recent work has used generative models to explain what a classifier learnt still by pointing out already perceptible or known features (that were actually used to annotate the pictures), but, to our knowledge, never evaluated on invisible cell phenotypes in the context of various assays (Lang et al 2021; Singla et al 2021). Furthermore, our approach does not focus on explaining a trained classifier and therefore does not require one.…”
Section: Discussion
confidence: 99%
“…This principle has since been widely used and improved in many ways, in order to generate various kinds of data. Numerous compelling works in image generation and translation have been proposed, including recent work explaining black box classifiers, but, to our knowledge, never with the aim of explaining invisible changes between conditions (Choi et al 2018; Baek et al 2020; Zhu et al 2017; Lang et al 2021; Singla et al 2021). On the contrary, because the aim is different, domains chosen for such work were usually visually different in order to validate the approaches.…”
Section: Introduction
confidence: 99%
“…In other words, it identifies what altered characteristics would have led to a different model prediction. However, applications to neuroimaging (Pawlowski et al, 2020), and even medical imaging (Major et al, 2020;Singla et al, 2021) more generally, are currently few and the utility across neuroimaging tasks needs to be explored.…”
Section: Interrogating the Decision Boundary
confidence: 99%
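The statement above describes counterfactual explanation as identifying what altered characteristics would have flipped the model's prediction. A hedged, generic illustration of that idea is sketched below: a plain gradient-based counterfactual search against a fixed classifier. This is not the preprint's method or that of any cited paper; all names and hyperparameters are assumptions.

```python
# Generic gradient-based counterfactual search (illustrative only): starting
# from an input x, nudge it until a fixed classifier's prediction approaches
# a target class, while an L1 term keeps the edit sparse so the changed
# characteristics are easy to read off.
import torch
import torch.nn as nn

def counterfactual_search(x, classifier, target=1.0, lr=0.05, steps=200, l1=1e-3):
    x_cf = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        p = classifier(x_cf)
        # push the prediction toward the target class, keep the edit sparse
        loss = (p - target).pow(2).mean() + l1 * (x_cf - x).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_cf.detach(), (x_cf - x).detach()  # counterfactual and the edit map

if __name__ == "__main__":
    clf = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1), nn.Sigmoid())
    x = torch.rand(1, 1, 64, 64)
    x_cf, delta = counterfactual_search(x, clf)
    print("original:", clf(x).item(), "counterfactual:", clf(x_cf).item())
```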
“…However, the second one measures humans' ability to predict model outputs given different inputs or variations in model parameters, regardless of "why?". Thus, ML-based models, especially in DL, suffer from the "black box" effect [193,196,292] that can be interpretable but hardly explainable. Not surprisingly, techniques such as disentangling [213,344,346] have recently re-emerged in the field to provide explainability in DNNs.…”
Section: AI Learning Principles With Emphasis in Lung Analysis
confidence: 99%