2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, 2017
DOI: 10.1109/uic-atc.2017.8397411

Interpretability of deep learning models: A survey of results

Abstract: Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks, including image, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process, incorporating these networks into mission-critical processes such as medical diagnosis, planning, and control, requires a level of trust associati…
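To make the black-box framing concrete, below is a minimal, self-contained sketch of one widely used post-hoc interpretability technique in the area this survey covers: gradient-based saliency (in the spirit of Simonyan et al.'s image-saliency work). This is an illustration, not code from the paper; the tiny classifier and random input are stand-ins for any differentiable model and image.

```python
import torch
import torch.nn as nn

# Stand-in classifier: any differentiable model mapping images to class
# scores would work; this tiny MLP is purely for illustration.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# A fake 28x28 grayscale image; requires_grad lets us ask which pixels
# the prediction is sensitive to.
x = torch.rand(1, 1, 28, 28, requires_grad=True)

scores = model(x)                  # class scores, shape (1, 10)
top = scores.argmax(dim=1).item()  # predicted class index

# Backpropagate the top class score to the input pixels.
scores[0, top].backward()

# Saliency map: gradient magnitude per pixel. Large values mark pixels
# whose perturbation most changes the predicted score.
saliency = x.grad.abs().squeeze()  # shape (28, 28)
print(saliency.shape)
```

Rendering `saliency` as a heatmap over the input is one of the simplest ways to expose what an otherwise black-box classifier is attending to, which is the kind of trust-building evidence the abstract calls for.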

Year published: 2018–2024
Cited by 262 publications (195 citation statements)
References: 28 publications

“…This lack of interpretability has been widely regarded as a major reason impeding the pervasive application of DRL in the networking industry, albeit this is true in general with regard to its application in other domains as well. Active research has been undertaken to address this limitation and facilitate a better interpretability of learning algorithms [41].…”
Section: Discussion of Results and Future Research (citation type: mentioning; confidence: 99%)
“…Because we employed transfer learning, the features that were extracted were based on the ImageNet classification task, and it is unclear how these features related to MRI-specific artifacts. However, interpretability of deep learning is an ongoing active field of research (Chakraborty et al., 2017), and we may be able to fit more interpretable models in the future.…”
Section: Limitations (citation type: mentioning; confidence: 99%)
“…However, the effectiveness of supervised training relies heavily on data quantity and label quality, especially in data with a wide range of data-specific factors of variations. Moreover, interpreting the results of these networks, important in areas such as clinical tasks, remains challenging [1].…”
Section: Introduction (citation type: mentioning; confidence: 99%)