2017
DOI: 10.1016/j.procs.2017.11.174
Visualization of maximizing images with deconvolutional optimization method for neurons in deep neural networks

Cited by 6 publications (8 citation statements)
References 6 publications
“…It is worth noting that the simple multiplication of feed-forward weight matrices gave quite recognizable visualizations of digits learned by the output neurons (the 2nd row in Figure 5 ). Slightly better and less noisy results were obtained by the method described in Nekhaev and Demin ( 2017 ) (the 1st row in Figure 5 ). It was also shown that the forward and reciprocal weights between the hidden and output layers were highly correlated.…”
Section: Results
confidence: 86%
“… Last layer weights visualization. The first row contains reconstructed maximizing images for the output neurons (Nekhaev and Demin, 2017 ). The second row is a simple product of two forward weight matrices: one of the size 784 × 100 and the other of size 100 × 10.…”
Section: Results
confidence: 99%
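The weight-product visualization described in the statement above can be sketched in a few lines of NumPy. The weights here are random stand-ins for trained ones; with a trained network, each column of the product would be the digit template the figure refers to:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for trained feed-forward weights (random here, for illustration):
# hidden layer 784 -> 100, output layer 100 -> 10.
W1 = rng.standard_normal((784, 100))
W2 = rng.standard_normal((100, 10))

# The product gives one 784-dimensional "template" per output neuron.
templates = W1 @ W2  # shape (784, 10)

# Reshape each column into a 28x28 image for display.
digit_images = [templates[:, k].reshape(28, 28) for k in range(10)]
```

With trained MNIST weights, plotting `digit_images` reproduces the kind of rough digit visualizations the citing paper describes in its second row.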
“…Various studies in the literature deal with artificial neural networks and graphs. Nekhaev & Demin (2017) visualized a neural network and its hidden layers to address the problem that the operation of deep artificial neural networks is hard to interpret. Liu (2017) significantly reduced search time by performing searches on the graph structure of the neural network considered in their article.…”
Section: Methods
confidence: 99%
“…In addition, many saliency methods have recently been criticized for giving misleading visualization interpretations, and researchers are advised to use them with caution [34]. To unveil CNN models further, direct deconvolution and indirect optimization are the two major approaches [35]. Deconvolution starts by finding an image from the dataset that triggers high activity in the neuron of interest, and the gradient of the neuron's activity is then calculated [35].…”
Section: Visualization of CNN Models
confidence: 99%
“…To unveil the CNN models further, direct deconvolution and indirect optimization are the two major approaches [35]. Deconvolution starts by finding an image from the dataset that triggers high activity in the neuron of interest, and the gradient of the neuron's activity is calculated [35]. In general, a deconvolutional network is a reversed convolutional network, which maps features back to pixels [36].…”
Section: Visualization of CNN Models
confidence: 99%
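The indirect optimization approach contrasted with deconvolution above is commonly realized as gradient ascent on the input image. Below is a minimal NumPy sketch of that idea, using a toy two-layer ReLU network with random weights and a hand-derived gradient; it illustrates the general activation-maximization technique, not the specific method of the cited paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network (random weights here; in practice, trained ones).
W1 = rng.standard_normal((784, 100)) * 0.05  # input -> hidden
W2 = rng.standard_normal((100, 10)) * 0.05   # hidden -> output

def neuron_activation(x, k):
    """Activation of output neuron k for a flattened input image x."""
    h = np.maximum(W1.T @ x, 0.0)  # ReLU hidden layer
    return W2[:, k] @ h

def grad_activation(x, k):
    """Gradient of neuron k's activation with respect to the input x."""
    pre = W1.T @ x
    mask = (pre > 0).astype(float)  # ReLU derivative
    return W1 @ (mask * W2[:, k])

# Gradient ascent from a small random image to maximize neuron 3.
x = rng.standard_normal(784) * 0.01
start = neuron_activation(x, 3)
for _ in range(100):
    x += 0.1 * grad_activation(x, 3)

maximizing_image = x.reshape(28, 28)
```

With a trained network, `maximizing_image` would be the kind of "maximizing image" the paper's title refers to; in practice such optimization is usually regularized (e.g. by blurring or norm penalties) to keep the result interpretable.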