Published: 2021
DOI: 10.3390/electronics11010083

Caffe2Unity: Immersive Visualization and Interpretation of Deep Neural Networks

Abstract: Deep neural networks (DNNs) dominate many tasks in the computer vision domain, but it is still difficult to understand and interpret the information contained within these networks. To gain better insight into how a network learns and operates, there is a strong need to visualize these complex structures, and this remains an important research direction. In this paper, we address the problem of how the interactive display of DNNs in a virtual reality (VR) setup can be used for general understanding and archite…

Cited by 9 publications (8 citation statements)
References 16 publications (20 reference statements)

“…To further evaluate and interpret the learned representations, we identified Shapley value-based (Lundberg and Lee, 2017) influential regions between different types of test inputs (Figure 10). Here, we did not calculate layer-wise Shapley values but only considered the test images to see which image regions were important for the network using our previous work (Aamir et al, 2022). The reason for this analysis is that we wanted to identify what the network looks at in making its decision.…”
Section: Bi-directional Interpretation Of Influence Scores Via Shaple...mentioning
confidence: 99%
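
The analysis quoted above computes Shapley values only over test images to see which image regions drive the network's decision. As a rough illustration of that idea (not the authors' implementation), the sketch below estimates patch-level Shapley values for an image classifier by Monte Carlo permutation sampling; the model, the 4×4 patch grid, the zero baseline, and the function name `patch_shapley` are all assumptions introduced here.

```python
# Minimal sketch: Monte Carlo estimate of Shapley values over a coarse grid of
# image patches, to highlight regions that drive a classifier's prediction.
import torch
import numpy as np

def patch_shapley(model, image, target_class, grid=4, n_perm=50, baseline=0.0):
    """image: (C, H, W) tensor; returns a (grid, grid) array of Shapley estimates."""
    model.eval()
    C, H, W = image.shape
    ph, pw = H // grid, W // grid          # leftover border pixels stay at the baseline
    n_patches = grid * grid

    def masked_score(active):
        # Build an input where only the 'active' patches keep their original pixels.
        x = torch.full_like(image, baseline)
        for p in active:
            r, c = divmod(p, grid)
            x[:, r*ph:(r+1)*ph, c*pw:(c+1)*pw] = image[:, r*ph:(r+1)*ph, c*pw:(c+1)*pw]
        with torch.no_grad():
            logits = model(x.unsqueeze(0))
        return torch.softmax(logits, dim=1)[0, target_class].item()

    values = np.zeros(n_patches)
    for _ in range(n_perm):
        order = np.random.permutation(n_patches)
        active, prev = set(), masked_score(set())
        for p in order:
            active.add(p)
            score = masked_score(active)
            values[p] += score - prev      # marginal contribution of patch p
            prev = score
    return (values / n_perm).reshape(grid, grid)
```

The resulting grid can be upsampled and overlaid on the test image as a heatmap, which is one simple way to expose "what the network looks at" when making its decision.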
“…Unity, as a gaming engine, was a common choice among researchers. For instance, [21] used Unity to create a virtual reality environment for visualizing deep neural networks built using the Caffe framework. The research in [22] and [23] both utilized Unity, illustrating its dual functionality: they used Unity for creating virtual reality environments and for developing deep convolutional neural network models.…”
Section: A Virtual Reality and Ai Systems (Rq1)mentioning
confidence: 99%
“…Aamir et al [10] presented a novel approach to immersively visualize and interpret deep networks in VR, where the user can move freely inside an AlexNet [28]. The layers are represented as a sequence of 2D planes in 3D space showing the activations, which we adopted in our approach.…”
Section: D Visualization Approachesmentioning
confidence: 99%
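
Representing each layer as a 2D plane of activations, as described in the statement above, presupposes an export step that turns per-layer activation maps into textures. The following is a minimal sketch of such a step, assuming a torchvision AlexNet with forward hooks rather than the original Caffe2Unity pipeline; the input path "input.jpg" and the output folder layout are placeholders.

```python
# Sketch: capture per-layer activation maps from a torchvision AlexNet via
# forward hooks and save each channel as a grayscale image that a 3D engine
# could map onto planes arranged layer by layer.
import os
import torch
from torchvision import models, transforms
from PIL import Image

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

activations = {}
def hook(name):
    def fn(module, inp, out):
        activations[name] = out.detach()
    return fn

# Register hooks on the convolutional layers of model.features.
for idx, layer in enumerate(model.features):
    if isinstance(layer, torch.nn.Conv2d):
        layer.register_forward_hook(hook(f"conv_{idx}"))

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)  # placeholder path

with torch.no_grad():
    model(img)

# Export each activation channel as an 8-bit image, one folder per layer.
for name, act in activations.items():
    os.makedirs(name, exist_ok=True)
    for ch in range(act.shape[1]):
        a = act[0, ch]
        a = (a - a.min()) / (a.max() - a.min() + 1e-8)
        Image.fromarray((a * 255).byte().cpu().numpy()).save(f"{name}/{ch:03d}.png")
```

The saved per-channel images could then be mapped onto quads placed one layer behind the other in a 3D scene, which is the arrangement the citing work says it adopted.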
“…• Rotate the image within the range of [1,10] degrees with a probability of 30%. • Scale the image with a factor from range [1., 1.05] and a probability of 5%.…”
Section: Feature Visualizationmentioning
confidence: 99%
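
The two augmentations quoted above (rotation within [1, 10] degrees with probability 30%, scaling by a factor in [1.0, 1.05] with probability 5%) map naturally onto standard image transforms. A possible torchvision equivalent is sketched below; this is an assumed reconstruction, not the cited paper's code.

```python
# Assumed torchvision equivalent of the quoted augmentation policy.
from torchvision import transforms

augment = transforms.Compose([
    # Rotate by an angle drawn from [1, 10] degrees, applied with probability 0.30.
    transforms.RandomApply([transforms.RandomRotation(degrees=(1, 10))], p=0.30),
    # Scale by a factor drawn from [1.0, 1.05], applied with probability 0.05.
    transforms.RandomApply([transforms.RandomAffine(degrees=0, scale=(1.0, 1.05))], p=0.05),
])
```

In practice, `augment(img)` would be applied per sample, e.g. inside a Dataset's `__getitem__`, so each training image is independently rotated and/or scaled with the stated probabilities.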