2018
DOI: 10.1007/978-3-030-01249-6_41

ExplainGAN: Model Explanation via Decision Boundary Crossing Transformations


Cited by 40 publications (39 citation statements)
References 15 publications
“…[62] showed the model's partial decision boundary by traversing the latent space around a specific input, in order to show how the model behaves as the input changes. The methods were initially proposed in the computer vision domain [67,119], whereas Hase and Bansal [62] developed and adapted the method for text and tabular data.…”
Section: Example-based Are Often Called Counterfactual Examples By Pr... (mentioning)
confidence: 99%
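As a rough illustration of the latent-space traversal described in the statement above (not the cited authors' code), the sketch below decodes points along one latent direction around a single input and records where the classifier's prediction flips, i.e. where the path crosses the model's decision boundary. The `encoder`, `decoder`, and `classifier` modules are hypothetical placeholders supplied by the caller.

```python
# Minimal sketch (not the cited authors' code) of latent-space traversal around
# a specific input: decode points along one latent direction and record the
# classifier's prediction at each point. Where the prediction flips, the path
# has crossed the model's decision boundary. `encoder`, `decoder`, and
# `classifier` are hypothetical PyTorch modules supplied by the caller.
import torch

def traverse_latent(x, direction, encoder, decoder, classifier,
                    steps=21, scale=3.0):
    """Walk the latent space around `x` (shape [1, ...]) along `direction`."""
    with torch.no_grad():
        z = encoder(x)                                   # latent code of the input
        path = []
        for a in torch.linspace(-scale, scale, steps):
            x_hat = decoder(z + a * direction)           # decoded neighbour of x
            pred = classifier(x_hat).argmax(dim=-1)      # model's decision there
            path.append((a.item(), x_hat, pred.item()))  # .item() assumes batch size 1
    return path
```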
“…Zhang et al provided an interpretation for regression saliency maps, as well as an adaptation of the perturbation-based quantitative evaluation of explanation methods [136]. ExplainGAN is a generative model that produces visually perceptible decision-boundary crossing transformations, which provide high-level conceptual insights that illustrate the manner in which a model makes decisions [137]. We proposed a barcode-like timeline to visualize the progress of the probability of substructure detection along with sweep scanning in US videos.…”
Section: Discussion and Future Directions (mentioning)
confidence: 99%
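To make the idea of a decision-boundary-crossing transformation concrete, here is a minimal training-step sketch in the spirit of, but not identical to, the ExplainGAN objective quoted above: a generator is pushed to produce a transformed input that a fixed classifier assigns to a chosen target class, while staying close to the original. The `generator` and `classifier` modules and the `lam` weight are assumptions for illustration only.

```python
# Minimal sketch, in the spirit of (but not identical to) the ExplainGAN idea:
# one training step that pushes a generator to produce a perceptible
# transformation of x that a fixed classifier assigns to a chosen target class,
# while staying close to the original input. `generator` and `classifier` are
# hypothetical modules; `optimizer` holds only the generator's parameters,
# so the classifier itself is not updated.
import torch
import torch.nn.functional as F

def boundary_crossing_step(x, target_class, generator, classifier,
                           optimizer, lam=0.1):
    optimizer.zero_grad()
    x_cross = generator(x)                               # transformed input
    logits = classifier(x_cross)                         # classifier's view of it
    flip_loss = F.cross_entropy(logits, target_class)    # push across the boundary
    recon_loss = F.l1_loss(x_cross, x)                   # keep the change small
    loss = flip_loss + lam * recon_loss                  # lam trades off the two terms
    loss.backward()
    optimizer.step()
    return loss.item()
```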
“…Model Interpretability: PROVER follows a significant body of previous work on developing interpretable neural models for NLP tasks to foster explainability. Several approaches have focused on formalizing the notion of interpretability (Rudin, 2019; Doshi-Velez and Kim, 2017; Hase and Bansal, 2020), tweaking features for local model interpretability (Ribeiro et al., 2016, 2018) and exploring interpretability in latent spaces (Joshi et al., 2018; Samangouei et al., 2018). Our work can be seen as generating explanations in the form of proofs for an NLP task.…”
Section: Related Work (mentioning)
confidence: 99%