2020
DOI: 10.3390/info11090426

Exploring Neural Network Hidden Layer Activity Using Vector Fields

Abstract: Deep Neural Networks are known for impressive results in a wide range of applications, being responsible for many advances in technology over the past few years. However, debugging and understanding neural network models’ inner workings is a complex task, as there are several parameters and variables involved in every decision. Multidimensional projection techniques have been successfully adopted to display neural network hidden layer outputs in an explainable manner, but comparing different outputs often mea…

Cited by 13 publications (12 citation statements)
References 27 publications
“…In the past few years, the use of visualization tools and techniques gives a better insight into the propagation of data in NN hidden layers [32] exploring different aspects of NN training, topology, and parametrization [33]. Special attention is paid to the process of activation of neurons in hidden layers through sets of activation functions and data propagation within hidden layers of the network, and the results show that different activation results between hidden layers are essential in creating an efficient internal network architecture [34].…”
Section: Neural Network Model Description
mentioning confidence: 99%
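The statement above concerns capturing and inspecting the activity of hidden layers. As a minimal sketch of how such activations can be collected (assuming PyTorch, a small stand-in fully connected model, and random inputs; this is not the setup of any cited paper), forward hooks can record each hidden layer's output for later projection:

```python
# Minimal sketch: capture hidden-layer activations with forward hooks.
# Assumptions: PyTorch is available; the model and data are stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),   # hidden layer 1
    nn.Linear(64, 32), nn.ReLU(),   # hidden layer 2
    nn.Linear(32, 3),               # output layer
)

activations = {}

def save_activation(name):
    # Forward hook that stores a layer's output for later inspection.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register hooks on the two ReLU layers to record hidden-layer activity.
model[1].register_forward_hook(save_activation("hidden1"))
model[3].register_forward_hook(save_activation("hidden2"))

x = torch.randn(128, 20)            # stand-in batch of 128 samples
_ = model(x)

for name, act in activations.items():
    print(name, tuple(act.shape))   # hidden1 (128, 64), hidden2 (128, 32)
```

The recorded arrays can then be fed to any projection technique (t-SNE, UMAP, etc.) to visualize how the data propagates through the network.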
“…The t-SNE and other similar techniques are robust against norm concentration, a typical characteristic of the curse of dimensionality [7], because of the shift-invariant similarities [69]. Although attaining good results if compared to other methods, in t-SNE projections, the observed distance between clusters is unreliable and sensitive to parametrization (perplexity parameter) [70], which can generate a misleading effect of cluster distance and shape resulting from the local nature of optimization and how hyperparameters are set up [8]. Nonetheless, t-SNE presents computational complexity equal to O(n²), which impairs practical applications, but Maaten [67] developed an accelerated version that reaches O(n log n) through approximations.…”
Section: Stochastic Neighbor Embedding
mentioning confidence: 99%
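Since the quoted statement highlights how sensitive t-SNE layouts are to the perplexity hyperparameter, a minimal sketch (assuming scikit-learn, with a random array standing in for real hidden-layer activations) is to project the same data at several perplexities and compare the resulting layouts:

```python
# Minimal sketch: probe t-SNE's sensitivity to perplexity.
# Assumption: X stands in for an (n_samples, n_features) activation matrix.
import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(500, 64)

for perplexity in (5, 30, 50):
    emb = TSNE(n_components=2, perplexity=perplexity,
               init="pca", random_state=0).fit_transform(X)
    print(f"perplexity={perplexity}: embedding shape {emb.shape}")
```

Plotting the three embeddings side by side makes the perplexity-dependent differences in cluster distance and shape, noted in the quote, directly visible.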
“…Furthermore, UMAP presents results as good as t-SNE (currently one of the widely used projection techniques) in visualization quality; nonetheless, it better preserves global data structures and is faster than t-SNE. In addition, UMAP was developed with machine learning applications in mind [8].…”
Section: Uniform Manifold Approximation For Dimension Reduction
mentioning confidence: 99%
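A comparable sketch for UMAP (assuming the umap-learn package is installed; the input is again a random stand-in for hidden-layer activations) exposes the parameters that trade local against global structure, which is the property the quoted comparison with t-SNE emphasizes:

```python
# Minimal sketch: UMAP projection of stand-in hidden-layer activations.
# Assumption: the umap-learn package is installed (pip install umap-learn).
import numpy as np
import umap

X = np.random.rand(500, 64)

# n_neighbors controls how much global structure is preserved;
# min_dist controls how tightly points are packed locally.
reducer = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1,
                    random_state=42)
embedding = reducer.fit_transform(X)
print(embedding.shape)               # (500, 2)
```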
“…They use a 2D projection to explain instance activation using t-SNE and place instances with similar activation values together. Cantareira et al [41] introduce an approach to exploring a neural network's hidden layer activities and simplifying the inner working mechanism of such complex networks. Their novel technique focuses on comparing projections derived from multiple stages in a neural network and visualizing the differences in perception.…”
Section: Visualization For Interpreting Models
mentioning confidence: 99%
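To illustrate the general idea of comparing projections from multiple network stages (only an illustration, not the authors' actual vector-field technique), one can project two layers' activations for the same samples and draw each sample's displacement between the two layouts:

```python
# Minimal sketch: per-sample displacement between two layer projections.
# Assumptions: activations are random stand-ins; real usage would take the
# hooked outputs of a trained network for the same batch of samples.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
act_layer1 = rng.normal(size=(300, 64))    # stand-in activations, layer 1
act_layer2 = rng.normal(size=(300, 32))    # stand-in activations, layer 2

proj1 = TSNE(n_components=2, random_state=0).fit_transform(act_layer1)
proj2 = TSNE(n_components=2, random_state=0).fit_transform(act_layer2)

# Each arrow shows how one sample "moves" between the two layer projections.
plt.quiver(proj1[:, 0], proj1[:, 1],
           proj2[:, 0] - proj1[:, 0], proj2[:, 1] - proj1[:, 1],
           angles="xy", scale_units="xy", scale=1, width=0.002)
plt.title("Per-sample displacement between layer projections")
plt.show()
```

Because independent t-SNE runs are not aligned with each other, arrows computed this way are only meaningful after the projections are aligned or optimized jointly, which is part of what dedicated comparison techniques such as the one cited above address.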