2019
DOI: 10.1007/s13748-019-00179-x

Improving transparency of deep neural inference process

Abstract: Deep learning techniques have advanced rapidly in recent years and are becoming a necessary component of widespread systems. However, the inference process of deep learning is a black box, which makes it poorly suited to safety-critical systems that must exhibit high transparency. In this paper, to address this black-box limitation, we develop a simple analysis method which consists of 1) structural feature analysis: lists of the features contributing to the inference process, 2) linguistic feature analysis: lists of the natural la…
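The abstract's notion of "structural feature analysis" (listing the features that contribute to an inference) can be illustrated with a minimal sketch. This is not the paper's implementation; the toy model, feature names, and the gradient-times-input attribution rule below are all illustrative assumptions.

# Hypothetical sketch only: rank the input features of a toy classifier by a
# simple gradient x input score, producing a "list of contributing features"
# for one inference. Model, features, and attribution rule are assumptions,
# not the method from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

feature_names = [f"feature_{i}" for i in range(8)]                     # assumed tabular input
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))  # toy classifier

x = torch.randn(1, 8, requires_grad=True)        # one sample to explain
logits = model(x)
pred = logits.argmax(dim=1).item()

# Back-propagate the predicted-class logit to get per-feature input gradients.
logits[0, pred].backward()

# Gradient x input as a crude per-feature contribution score.
contributions = (x.grad * x).detach().squeeze(0)

# Print features in order of absolute contribution, largest first.
for idx in contributions.abs().argsort(descending=True).tolist():
    print(f"{feature_names[idx]:>10}: {contributions[idx].item():+.4f}")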

Cited by 16 publications (7 citation statements).
References 28 publications (14 reference statements).
“…Secondly, if many differently trained CNNs yield dissimilar results, how to produce a canonical comparer out of them? Alternatively, if using any of such CNNs for the task, how to audit their results [66]? Thirdly, as reported by Nguyen et al. [67], well-performing CNNs can lead to aberrant results that are misleading, even if they are produced with an almost-full (>99%) confidence.…”
Section: Linear Feature Matching For Image Processing (mentioning)
Confidence: 99%
“…Further, interpretability is also useful for performance improvement, debugging during training, and validation of training results. Developers can understand the internal behavior of a trained NN in order to train higher-performance models [45]. For example, a developer can visualize an NN's focus points for an incorrect inference and understand what went wrong, before additional training data is collected according to the analysis.…”
Section: Verification Of Machine Learning Models (mentioning)
Confidence: 99%
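As a hedged illustration of the workflow described in the statement above (not the cited paper's tooling), the sketch below computes a plain gradient saliency map for an input the toy network gets wrong, so a developer can inspect which pixels the prediction was most sensitive to; any attribution method such as Grad-CAM could be substituted.

# Illustrative assumption, not the paper's code: visualize a network's "focus
# points" for an incorrect inference via input-gradient saliency.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(4 * 28 * 28, 10))   # toy MNIST-like net

image = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in input image
logits = model(image)
pred = logits.argmax(dim=1).item()
true_label = (pred + 1) % 10    # pretend the label differs, to force the "wrong" case

if pred != true_label:                        # inspect only incorrect inferences
    logits[0, pred].backward()
    saliency = image.grad.abs().squeeze()     # 28x28 map of focus points
    threshold = saliency.quantile(0.9).item()
    # Coarse text rendering: '#' marks the pixels the prediction is most sensitive to.
    for row in saliency:
        print("".join("#" if v >= threshold else "." for v in row.tolist()))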
“…A quality sub-characteristic, Operability, has a measure called Monitoring capability. Explainable AI (XAI) [8] is a rapidly growing area of artificial intelligence research, and techniques to explain or interpret ML components have been proposed in recent years [13], [14]. They can be used as monitoring capabilities for ML-based AI systems, but XAI research is still at a very early stage.…”
Section: Extension A1: Decomposition Of Evaluation Target (mentioning)
Confidence: 99%