2021
DOI: 10.3390/make3040048

Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey

Abstract: Deep Learning is a state-of-the-art technique to make inference on extensive or complex data. As a black box model due to their multilayer nonlinear structure, Deep Neural Networks are often criticized as being non-transparent and their predictions not traceable by humans. Furthermore, the models learn from artificially generated datasets, which often do not reflect reality. By basing decision-making algorithms on Deep Neural Networks, prejudice and unfairness may be promoted unknowingly due to a lack of trans…

Cited by 186 publications (90 citation statements)
References 76 publications
“…Essentially neural networks are non-linear mappings with many parameters. Due to a large number of parameters, they are referred to as a "black-box" [6]. A loss function is defined to measure the difference between the output of the network with the known (expected) output.…”
Section: Input Layer
confidence: 99%
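The excerpt above describes a loss function measuring the difference between the network's output and the known (expected) output. A minimal sketch of one common choice, mean squared error, is shown below; the function name and sample values are illustrative only, not taken from the cited paper:

```python
import numpy as np

def mse_loss(predicted, expected):
    """Mean squared error between the network's output and the known target."""
    predicted = np.asarray(predicted, dtype=float)
    expected = np.asarray(expected, dtype=float)
    # Average of the squared element-wise differences
    return float(np.mean((predicted - expected) ** 2))

# Example: network outputs vs. expected labels; a small value indicates a good fit
print(mse_loss([0.9, 0.1, 0.8], [1.0, 0.0, 1.0]))
```

Training adjusts the network's many parameters to drive this value down, which is exactly why the resulting mapping is hard to interpret parameter by parameter.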
“…The synaptic weight informs about the amplitude or the strength of the connection between two nodes (neurons). Since ANNs are black-box models because of their multilayer nonlinear structure, the explanation of the underlying process that produces the relationship between the dependent (target) and independent (predictors) variables is unintelligible, nontransparent, and untraceable by humans [ 30 ]. The overall survival outcome and the molecular subtypes of patients with diffuse large B-cell lymphoma (DLBCL) were predicted with high accuracy, and the most relevant genes were highlighted using nonlinear analysis.…”
Section: Discussion
confidence: 99%
“…Therefore, the DNN is defined as a "black box", mainly because there is no primary theoretical basis. Scholars divide the interpretability of the DNN into two types: post hoc interpretability and intrinsic interpretability [63,64]. The post hoc interpretability interprets the decisions in the actual application [65], in which the visualization method is the widely used approach [66].…”
Section: Characteristic of the DNN's Black Box
confidence: 99%
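The excerpt above notes that post hoc interpretability often relies on visualization, e.g. gradient-based saliency that ranks input features by how strongly they influence the output. A minimal sketch, using a toy one-layer model and finite-difference gradients instead of any real framework (all names here are hypothetical):

```python
import numpy as np

def toy_model(x, w):
    """A tiny stand-in 'network': nonlinear map from inputs to a single score."""
    return float(np.tanh(x @ w))

def saliency(x, w, eps=1e-5):
    """Post hoc explanation: finite-difference gradient of the score
    with respect to each input feature; magnitude = importance."""
    grads = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        # Central difference approximates d score / d x_i
        grads[i] = (toy_model(xp, w) - toy_model(xm, w)) / (2 * eps)
    return np.abs(grads)

rng = np.random.default_rng(0)
x = rng.normal(size=4)
w = rng.normal(size=4)
print(saliency(x, w))  # larger values mark more influential inputs
```

Real saliency methods compute these gradients analytically through backpropagation and render them as heatmaps over the input, which is the visualization approach the excerpt refers to.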