Interpretation of Deep CNN Based on Learning Feature Reconstruction With Feedback Weights
2019
DOI: 10.1109/access.2019.2899901

Cited by 19 publications (14 citation statements); citing publications span 2019 to 2023. References 22 publications.

Citation statements (ordered by relevance):
“…As discussed above, 2D features can be regarded as grayscale images, since the neural network perceives a 2D input through the value of each pixel rather than as an object in the sense of human cognition. Therefore, many saliency detection methods used in image processing, such as LRP, input cropping, deconvolution, and gradient algorithms [16], can also be applied in SEI. Image-specific class saliency visualization (ISCSV), an effective saliency detection technique based on the gradient algorithm, is the theoretical basis of our proposed method.…”
Section: MC-Feature (mentioning)
confidence: 99%
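A minimal sketch of the gradient-based saliency this quotation refers to, written in PyTorch: it assumes a hypothetical trained classifier `model` that accepts a single-channel 2D transformed feature and returns class scores, and it computes an image-specific class saliency map as the absolute gradient of the chosen class score with respect to each input pixel (in the spirit of ISCSV). It is an illustration under those assumptions, not the cited authors' implementation.

import torch

def class_saliency_map(model, feature_2d, target_class):
    # feature_2d: (H, W) tensor, a 2D transformed feature treated as a
    # grayscale image; model: any CNN mapping (1, 1, H, W) to class scores.
    model.eval()
    x = feature_2d.clone().detach().unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
    x.requires_grad_(True)

    scores = model(x)                   # (1, num_classes)
    scores[0, target_class].backward()  # gradient of the class score w.r.t. the input

    saliency = x.grad.detach().abs()[0, 0]      # (H, W) per-pixel importance
    return saliency / (saliency.max() + 1e-12)  # normalize for display

Each value of the returned map indicates how strongly the corresponding pixel of the 2D feature influences the chosen class score, which is what the quoted passage means by the network perceiving the input pixel by pixel.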
“…Based on the successful application of ISCSV to saliency detection, it is considered that the inner recognition mechanism can probably be reflected by ISCSV. As mentioned in Section 1, the various two-dimensional transformed features can be regarded as equivalent to the input in Equation (16). At the beginning of the forward propagation of the CNN, each pixel in the 2D transformed feature is perceived by a different convolutional kernel.…”
Section: Saliency Map (mentioning)
confidence: 99%
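The quoted point that every pixel of the 2D transformed feature is perceived by the convolutional kernels at the start of forward propagation can be illustrated with a toy PyTorch snippet (assumed feature size and layer width, unrelated to the cited model): a single-channel 2D feature is fed to one convolutional layer exactly as a grayscale image would be, and each kernel produces a response map covering all of the feature's pixels.

import torch
import torch.nn as nn

# A 2D transformed feature (e.g., 64 x 64) treated as a one-channel image.
feature_2d = torch.randn(1, 1, 64, 64)

# First convolutional stage: each of the 8 kernels scans every pixel
# neighbourhood of the 2D feature, as it would for a grayscale image.
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
response = conv(feature_2d)

print(response.shape)  # torch.Size([1, 8, 64, 64]): one response map per kernel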
“…Within-domain image translation has applications in domain adaptation [1]-[6], super-resolution [7], style transfer [8], photo editing [9], and target and anomaly detection [10]-[13], while cross-domain image translation has applications in data generation [14], data interpretation [15], transformation of 3D images to their corresponding 3D representation for the interpretation of deep CNNs [16], and image completion [15], [17], [18]. The availability of a large amount of paired data for image translation makes convolutional neural network (CNN) approaches to regression highly attractive for both within- and cross-domain image translation, surpassing the performance of state-of-the-art non-CNN approaches [19], [20].…”
Section: Related Work (mentioning)
confidence: 99%
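The remark about CNN regression on paired data can be sketched with a minimal encoder-decoder trained with an L1 reconstruction loss. This is a toy example under assumed names and shapes (random paired tensors `src` and `tgt`), not the pipeline of the quoting paper or of any cited work.

import torch
import torch.nn as nn

# Toy paired data: source-domain and target-domain images (assumed 1 x 64 x 64).
src = torch.randn(16, 1, 64, 64)
tgt = torch.randn(16, 1, 64, 64)

# Minimal encoder-decoder regressor for paired image-to-image translation.
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),            # 64 -> 32
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),              # 32 -> 64
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # per-pixel regression loss against the paired target

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(net(src), tgt)
    loss.backward()
    opt.step()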
“…The image-to-image translation problem is related either to computer vision, where the mapping is from many to one, or to computer graphics, where the mapping is from one to many. Despite the similar nature of these tasks, they have been tackled separately in [1]-[13]. In our approach, however, we tackle them in a unified framework.…”
Section: Introduction (mentioning)
confidence: 99%