Proceedings of the 9th International Joint Conference on Computational Intelligence 2017
DOI: 10.5220/0006495102150222
Towards a Better Understanding of Deep Neural Networks Representations using Deep Generative Networks

Cited by 10 publications (6 citation statements)
References 4 publications
“…• Activation maximization, for example based on Generative Adversarial Networks [56], uses deep generative networks and tailored optimization methods to generate class-relevant inputs for convolutional neural networks [13]. A human user can then understand the internal representations assimilated by the network and the typical representations of the classes.…”
Section: Post-hoc Approaches: Explain Machine Learning Models
confidence: 99%
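As a rough illustration of the generator-based activation maximization described in this citing statement, the sketch below optimizes a latent code so that a fixed classifier's logit for a chosen class is maximized and then decodes it into a class-relevant image. It assumes PyTorch; `generator`, `classifier`, and all hyperparameters are placeholder assumptions, not the models or settings of the cited paper.

# Minimal sketch of activation maximization through a pretrained generator (assumed PyTorch).
import torch

def activation_maximization(generator, classifier, target_class,
                            latent_dim=128, steps=200, lr=0.05):
    z = torch.randn(1, latent_dim, requires_grad=True)   # latent code to optimize
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        image = generator(z)                              # decode latent code to an image
        logits = classifier(image)
        loss = -logits[0, target_class]                   # maximize the target class logit
        loss.backward()
        optimizer.step()
    return generator(z).detach()                          # class-relevant synthetic input

Constraining the search to a generator's latent space, rather than optimizing pixels directly, is what keeps the resulting inputs close to natural images and therefore interpretable.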
“…Attributions or feature attributions are one of the most popular techniques used to explain the model’s predictions. The attribution method assigns scores to each input feature that reflect the contribution of that feature to an ML model’s prediction, thereby explaining the role played by that feature in the prediction. In the case of GNNs, attribution methods assign attribution scores to graph nodes and edges based on their contributions to the final prediction of the model.…”
Section: Methods
confidence: 99%
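The per-feature scoring described above can be made concrete with a gradient-times-input sketch, one of the simplest attribution methods and not necessarily the one used in the citing work. It assumes PyTorch; `model`, `x`, and `target_class` are hypothetical placeholders for a differentiable model, a single input example, and the output to be explained.

# Minimal sketch of gradient-times-input attribution (assumed PyTorch).
import torch

def gradient_x_input(model, x, target_class):
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]        # prediction being explained
    score.backward()                          # d(score)/d(input features)
    return (x.grad * x).detach()              # per-feature attribution scores

For GNNs, the same recipe assigns scores to node features (and, with differentiable edge weights, to edges) by differentiating the model's output with respect to them.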
“…Another compelling post hoc rationalization is a class of explanations that reverse-engineer exemplars of a neuron or layer's receptive field. This can be done using activation maximization techniques that find the maximum value of the dot product between an activation vector and some iteratively sampled image set (Erhan et al, 2009; Yosinski et al, 2015) or an iteratively generated image (Nguyen et al, 2016; Despraz et al, 2017). The maximal argument is taken as that neuron's receptive field, or preferred stimulus.…”
Section: Post Hoc Explanations
confidence: 99%
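The input-space variant mentioned here (Erhan et al. style) can be sketched as gradient ascent on the image itself so that a chosen unit's activation is maximal; the result approximates that unit's preferred stimulus. It assumes PyTorch; `get_activation` is a hypothetical callable (for example, wrapping a forward hook on the layer of interest) and is not a library function.

# Minimal sketch of input-space activation maximization (assumed PyTorch).
import torch

def preferred_stimulus(get_activation, image_shape=(1, 3, 224, 224),
                       steps=300, lr=0.1):
    image = torch.randn(image_shape, requires_grad=True)   # start from random noise
    optimizer = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        activation = get_activation(image)                  # activation of the unit under study
        loss = -activation.sum()                            # ascend the activation value
        loss.backward()
        optimizer.step()
    return image.detach()                                    # approximate preferred stimulus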