2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.110
Improving Interpretability of Deep Neural Networks with Semantic Information

Abstract: Interpretability of deep neural networks (DNNs) is essential since it enables users to understand the overall strengths and weaknesses of the models, conveys an understanding of how the models will behave in the future, and shows how to diagnose and correct potential problems. However, it is challenging to reason about what a DNN actually does due to its opaque or black-box nature. To address this issue, we propose a novel technique to improve the interpretability of DNNs by leveraging the rich semantic information …

Cited by 96 publications (66 citation statements)
References 27 publications
“…The interpretability of DNN is essential in helping users understand the overall strengths and weaknesses of the model. Although we have given some explanations about why deep learning works for our specific study, much more work is needed to open up the “black-box” and improve the interpretability of DNN.…”
Section: Discussion (supporting, confidence: 58%)
“…In recent years, a variety of successive models [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][18][19][20] have achieved promising results. To generate captions, semantic concepts or attributes of objects in images are detected and utilized as inputs of the RNN decoder [3,6,12,20,22].…”
Section: Deep Image Captioning (mentioning, confidence: 99%)
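The statement above describes a common pipeline: a detector scores semantic concepts in the image, and those scores condition an RNN caption decoder. The sketch below is a minimal illustration of that idea, not the code of any cited paper; the class name, dimensions, and the choice to initialise the LSTM state from the attribute scores are assumptions made here for clarity (PyTorch).

    import torch
    import torch.nn as nn

    class AttributeConditionedDecoder(nn.Module):
        """Caption decoder whose initial LSTM state is derived from
        detected semantic attribute scores (illustrative sketch)."""
        def __init__(self, num_attributes=1000, vocab_size=10000,
                     embed_dim=256, hidden_dim=512):
            super().__init__()
            self.attr_proj = nn.Linear(num_attributes, hidden_dim)
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, attr_scores, captions):
            # attr_scores: (B, num_attributes) detector outputs in [0, 1]
            # captions:    (B, T) token ids of the caption prefix
            h0 = torch.tanh(self.attr_proj(attr_scores)).unsqueeze(0)  # (1, B, H)
            c0 = torch.zeros_like(h0)
            emb = self.embed(captions)              # (B, T, E)
            hidden, _ = self.lstm(emb, (h0, c0))    # (B, T, H)
            return self.out(hidden)                 # (B, T, vocab) logits

Training would feed captions[:, :-1] and apply cross-entropy against captions[:, 1:]. Some of the cited works instead inject the attributes at every decoding step rather than only into the initial state; both designs appear in this line of work.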
“…To integrate topic representation into the training process, the authors of [3] introduced an interpretive loss, which helps to improve the interpretability of the learned features. These loss functions are designed to suit their own algorithms.…”
Section: Deep Image Captioning (mentioning, confidence: 99%)