2019
DOI: 10.48550/arxiv.1911.12116
Preprint

Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey

Abstract: Deep Learning is a state-of-the-art technique for making inferences on extensive or complex data. Because of their multilayer nonlinear structure, Deep Neural Networks are black box models, often criticized as non-transparent and as producing predictions that humans cannot trace. Furthermore, the models learn from artificial datasets, often containing bias or contaminated discriminating content. Through their increasing deployment, decision-making algorithms can contribute to promoting prejudice and unfairness, which is not easy …

Cited by 28 publications (36 citation statements) · References 62 publications
“…With this parameterization, we interpret the proportionality constant τ_1^(k)[m] as learning the gain factor between the image-domain noise level σ and the m-th subband's noise level at layer k. This framework has the added benefit of decoupling noise-level estimation from denoising, allowing for a trade-off between accurate estimation and speed at inference time. We explore this trade-off using two different noise-level estimation algorithms at inference time in Section IV-D.…”
Section: Noise-Adaptive Thresholds
confidence: 99%
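The scheme described in the quote above — a learned per-subband gain factor that scales the image-domain noise level σ into a denoising threshold — can be sketched in a few lines. The function names, the soft-thresholding choice, and the array shapes here are illustrative assumptions, not details taken from the citing paper:

```python
import numpy as np

def soft_threshold(x, lam):
    # Elementwise soft-thresholding (shrinkage) operator with threshold lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def noise_adaptive_threshold(subbands, sigma, tau):
    # subbands: list of M coefficient arrays at one layer k (hypothetical layout)
    # sigma:    image-domain noise-level estimate, obtained separately
    # tau:      length-M array of learned gain factors tau_1^(k)[m]
    # Each subband's threshold is its gain factor times sigma, so swapping in
    # a different sigma estimator at inference time adapts every threshold
    # without retraining -- the decoupling the quote refers to.
    return [soft_threshold(b, t * sigma) for b, t in zip(subbands, tau)]
```

Because σ enters only as a multiplier, a fast but coarse noise estimator and a slow but accurate one can be exchanged freely at inference time, which is the estimation-vs-speed trade-off the authors explore.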
“…The Explainable AI literature is blooming in parallel with the advances of DL models, and so is the set of surveys doing a great job at classifying the various methods [5,14,38]. We particularly focus on attribution methods, i.e.…”
Section: Related Work: Explainable Deep Learning and Compositional Pa…
confidence: 99%
“…With the increasing complexity of modern Neural Networks [15], it is a difficult task to explain which particular features influence the prediction. Therefore, DNNs have often been considered 'black boxes' [16], [17]. However, especially in security-critical applications (such as autonomous driving or personalized medicine), transparency of the decision-making model is mandatory, and therefore the network's inability to explain its predictions restricts the applicability of ML systems.…”
Section: Introduction
confidence: 99%