2020
DOI: 10.48550/arxiv.2007.15861
Preprint

Saliency-driven Class Impressions for Feature Visualization of Deep Neural Networks

Abstract: In this paper, we propose a data-free method of extracting Impressions of each class from the classifier's memory. The Deep Learning regime empowers classifiers to extract distinct patterns (or features) of a given class from training data, which is the basis on which they generalize to unseen data. Before deploying these models on critical applications, it is very useful to visualize the features considered to be important for classification. Existing visualization methods develop high confidence images consi…
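To make the idea of a data-free "class impression" concrete, below is a minimal sketch of plain activation maximization: starting from noise, the input is optimized so that a frozen pretrained classifier assigns a high logit to a chosen class. This is not the paper's saliency-driven procedure; the ResNet-18 backbone, class index, optimizer settings, and L2 penalty are illustrative assumptions only.

```python
# Generic activation-maximization sketch (illustrative, not the paper's method):
# synthesize an image that a frozen, pretrained classifier scores highly for one class.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the input image is optimized

target_class = 207  # arbitrary ImageNet class index, chosen for illustration
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(img)
    # Maximize the target-class logit; a small L2 penalty keeps pixel values bounded.
    loss = -logits[0, target_class] + 1e-4 * img.pow(2).sum()
    loss.backward()
    optimizer.step()

class_impression = img.detach()  # the synthesized "impression" of the target class
```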

Cited by 1 publication (2 citation statements)
References 12 publications (18 reference statements)
“…Here, we mention a few selected methods as examples of the plethora of approaches for understanding CNN decision-making: Saliency maps show the importance of each pixel to the classification decision (Springenberg et al., 2014; Bach et al., 2015; Smilkov et al., 2017; Zintgraf et al., 2017), concept activation vectors show a model's sensitivity to human-defined concepts (Kim et al., 2018), and other methods - amongst them feature visualizations - focus on explaining individual units (Bau et al., 2020). Some tools integrate interactive, software-like aspects (Hohman et al., 2019; Wang et al., 2020; Carter et al., 2019; Collaris & van Wijk, 2020; OpenAI, 2020), combine more than one explanation method (Shi et al., 2020; Addepalli et al., 2020) or make progress towards automated explanation methods (Lapuschkin et al., 2019; Ghorbani et al., 2019b). As overviews, we recommend Zhang & Zhu (2018), Montavon et al. (2018) and Samek et al. (2020).…”
Section: Related Work
confidence: 99%
“…As such, a particular milestone for CNNs was understanding that features are formed in a hierarchical fashion (LeCun et al., 2015; Güçlü & van Gerven, 2015; Goodfellow et al., 2016). Over the past few years, extensive investigations to better understand CNNs have been based on feature visualizations (Olah et al., 2020b; 2020a; Cammarata et al., 2020; Cadena et al., 2018), and the technique is being combined with other explanation methods (Olah et al., 2018; Carter et al., 2019; Addepalli et al., 2020; Hohman et al., 2019).…”
Section: Introduction
confidence: 99%