2021
DOI: 10.1073/pnas.2016917118

Neural network interpretation using descrambler groups

Abstract: The lack of interpretability and trust is a much-criticized feature of deep neural networks. In fully connected nets, the signaling between inner layers is scrambled because backpropagation training does not require perceptrons to be arranged in any particular order. The result is a black box; this problem is particularly severe in scientific computing and digital signal processing (DSP), where neural nets perform abstract mathematical transformations that do not reduce to features or concepts. We present here…
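The scrambling the abstract describes follows from a permutation symmetry of fully connected layers: reordering the neurons of a hidden layer, while reordering the adjacent weight matrices to match, leaves the network's input-output map unchanged. A minimal sketch of this symmetry (hypothetical code, not from the paper; the layer shapes and the tanh activation are arbitrary choices):

```python
# Sketch: permuting a hidden layer's neurons, with matching permutations of
# the adjacent weight matrices, leaves the network function unchanged.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 32)), rng.normal(size=64)   # input -> hidden
W2, b2 = rng.normal(size=(10, 64)), rng.normal(size=10)   # hidden -> output

def net(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

P = np.eye(64)[rng.permutation(64)]   # random permutation of the 64 hidden units
x = rng.normal(size=32)

y_original = net(x, W1, b1, W2, b2)
y_permuted = net(x, P @ W1, P @ b1, W2 @ P.T, b2)  # reordered hidden layer

assert np.allclose(y_original, y_permuted)  # same function, shuffled internals
```

Because every such permutation yields an equally good minimum of the training loss, the ordering that gradient descent happens to land on carries no meaning, which is why the raw hidden-layer weights are hard to read.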


Cited by 28 publications (22 citation statements). References 21 publications.

Citation statements:
“…In a very recent and elegant study, it was shown that a neural network could be interpreted, and how each of its hidden layers can be mapped to specific actions and mathematical transformations (Amey et al. 2021). Such an approach is naturally highly attractive for fully understanding the strengths and weaknesses of a DNN.…”
Section: Results (mentioning; confidence: 99%)
“…In a very recent and elegant study, it was shown that a neural network could be interpreted, and how each of its hidden layers can be mapped to specific actions and mathematical transformations [18]. Such an approach is naturally highly attractive for fully understanding the strengths and weaknesses of a DNN.…”
Section: Results (mentioning; confidence: 99%)
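The mapping of hidden layers to known mathematical transformations can be illustrated with a toy descrambling example. This is a hypothetical sketch, not the published algorithm (the paper works with continuous descrambler groups found by numerical optimization rather than discrete permutations): a layer whose weights are a row-permuted discrete cosine transform looks opaque, but an assignment match against the candidate basis recovers the permutation and exposes the layer's mathematical role.

```python
# Toy "descrambling": recover the row permutation that hides a DCT-II basis
# inside a layer's weight matrix, making the layer's role explicit.
import numpy as np
from scipy.optimize import linear_sum_assignment

n = 16
# DCT-II basis as the "true" transform the layer implements.
k, t = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.cos(np.pi * (t + 0.5) * k / n)

rng = np.random.default_rng(1)
P = np.eye(n)[rng.permutation(n)]
W = P @ F                                   # scrambled layer weights

# Match each row of W to the most similar row of F (maximal |inner product|;
# DCT rows are mutually orthogonal, so the match is unambiguous).
cost = -np.abs(W @ F.T)
rows, cols = linear_sum_assignment(cost)
D = np.zeros((n, n)); D[cols, rows] = 1.0   # recovered descrambling permutation

assert np.allclose(D @ W, F)                # descrambled layer == DCT basis
```

Here SciPy's linear_sum_assignment stands in for the paper's search over orthogonal transformations, in the special case where the descrambler happens to be a permutation.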
“…However, despite the impressive successes of artificial intelligence in finding patterns in complex datasets and predicting outputs, some unanswered questions hamper its contribution to our understanding of cell function. First, the complexity of trained networks has caused them to be regarded as black boxes, and attempts are currently being made to decrease their opacity [54,55] or increase their reliability [56,57]. Secondly, a well-known caveat of correlation studies is that a strong correlation between two parameters is not proof of a causal relationship between them, and unveiling causal relationships is by no means a simple task [58].…”
Section: Data Processing With Multivariate Statistics and Machine Learning (mentioning; confidence: 99%)