2020 International Conference on Data Mining Workshops (ICDMW)
DOI: 10.1109/icdmw51313.2020.00068
Interpreting Deep Neural Networks through Prototype Factorization

Cited by 9 publications (8 citation statements)
References 17 publications
“…A study involving end users to contrast this approach with other prototype-based methods could potentially lead to new insights regarding the interpretability of the extracted prototypes. Das et al. [103] extract prototypes from the latent representation of the input data in a deep neural network. The prototypes are weighted, and a surrogate model is built from the prototypes.…”
Section: Instance-based Explanations
confidence: 99%
“…Approaches for understanding DNNs typically focus on explaining individual model decisions post hoc, i.e., they are designed to work on any pre-trained DNN. Examples of this include perturbation-based [15], [16], [17], activation-based [18], [19], or backpropagation-based explanations [5], [20], [21], [22], [23], [24], [25], [26]. In order to obtain explanations for the B-cos networks, we also rely on a backpropagation-based approach.…”
Section: Related Work
confidence: 99%
“…Approaches for understanding DNNs typically focus on explaining individual model decisions post hoc, i.e., they are designed to work on any pre-trained DNN. Examples of this include perturbation-based [22, 27, 29], activation-based [8, 17], or backpropagation-based explanations [3, 31, 33, 34, 36, 37, 39, 45]. In order to obtain explanations for the B-cos networks, we also rely on a backpropagation-based approach.…”
Section: Related Work
confidence: 99%