2020
DOI: 10.1073/pnas.1907375117
Understanding the role of individual units in a deep neural network

Abstract: Deep neural networks excel at finding hierarchical representations that solve complex tasks over large datasets. How can we humans understand these learned representations? In this work, we present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concept…
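To make the dissection idea concrete, below is a minimal sketch of its unit-scoring step, assuming a precomputed activation map for one unit and a binary concept segmentation of the same spatial size. The function name, variable names, and the top-quantile threshold value are illustrative assumptions, not the paper's exact settings.

```python
# Sketch: score one unit against one concept via intersection over union
# (IoU), the agreement measure used by network dissection. Hypothetical
# names and threshold; not the authors' reference implementation.
import numpy as np

def unit_concept_iou(activation: np.ndarray,
                     concept_mask: np.ndarray,
                     top_quantile: float = 0.04) -> float:
    """Threshold the unit's activation map at its top quantile, then
    compare the resulting binary mask to the concept segmentation."""
    threshold = np.quantile(activation, 1.0 - top_quantile)
    unit_mask = activation >= threshold
    intersection = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return float(intersection) / float(union) if union else 0.0

# Toy usage: a 7x7 activation map vs. a 7x7 "object" mask.
rng = np.random.default_rng(0)
act = rng.random((7, 7))
mask = rng.random((7, 7)) > 0.7
print(f"IoU = {unit_concept_iou(act, mask):.3f}")
```

A unit whose high activations consistently overlap one concept's masks across a dataset would receive a high IoU for that concept, which is the sense in which a unit "matches" an object concept.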

Help me understand this report
View preprint versions

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1
1
1

Citation Types

4
208
0
1

Year Published

2021
2021
2023
2023

Publication Types

Select...
7
1

Relationship

0
8

Authors

Journals

citations
Cited by 292 publications
(235 citation statements)
references
References 29 publications
4
208
0
1
Order By: Relevance
“…Further methods do not seek to explain in terms of input features but in terms of the latent space, where the directions in the latent space code for higher level concepts, such as color, material, object part, or object [17], [18], [205]. In particular, the TCAV method [89] produces a latent-space explanation for every individual prediction.…”
Section: E. Other Methods (mentioning, confidence: 99%)
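Since the statement above singles out TCAV, a minimal sketch of that idea follows, assuming you can already extract layer activations for concept and random examples, plus per-example gradients of a class logit with respect to those activations. All names are illustrative stand-ins, not the authors' reference implementation.

```python
# Sketch of TCAV: fit a linear boundary between concept and random
# activations; the concept activation vector (CAV) is its normal, and
# the TCAV score is the fraction of examples whose class logit increases
# along that direction. Hypothetical names throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts: np.ndarray,
                              random_acts: np.ndarray) -> np.ndarray:
    """Return the unit-norm normal of a linear classifier separating
    concept activations from random activations."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)),
                        np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()
    return cav / np.linalg.norm(cav)

def tcav_score(logit_grads: np.ndarray, cav: np.ndarray) -> float:
    """Fraction of examples with a positive directional derivative of
    the class logit along the CAV."""
    return float((logit_grads @ cav > 0).mean())

# Toy usage with random stand-ins for real activations and gradients.
rng = np.random.default_rng(0)
cav = concept_activation_vector(rng.normal(1, 1, (50, 16)),
                                rng.normal(0, 1, (50, 16)))
print(f"TCAV score = {tcav_score(rng.normal(0.2, 1, (40, 16)), cav):.2f}")
```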
“…Let us use for this the Adience benchmark data set [44], providing 26 580 images captured "in the wild" and labeled into eight ordinal groups of age ranges {(0-2), (4-6), (8-13), (15-20), (25-32), (38-43), (48-53), (60+)}.…”
Section: A. Example 1: Validating a Face Classifier (mentioning, confidence: 99%)
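As a small illustration of the ordinal labeling just quoted, the helper below maps a raw age to its group index. The bin boundaries follow the ranges in the citation; treating ages that fall between groups as unlabeled (None) is an assumption of this sketch, and the names are hypothetical.

```python
# Sketch: map an age to one of the eight ordinal Adience groups quoted
# above. Gap handling (None between ranges) is this sketch's assumption.
ADIENCE_GROUPS = [(0, 2), (4, 6), (8, 13), (15, 20),
                  (25, 32), (38, 43), (48, 53), (60, None)]

def age_to_ordinal(age: int):
    """Return the ordinal group index, or None if the age falls in a
    gap between the benchmark's labeled ranges."""
    for idx, (lo, hi) in enumerate(ADIENCE_GROUPS):
        if age >= lo and (hi is None or age <= hi):
            return idx
    return None

print(age_to_ordinal(27))  # -> 4, the (25-32) group
print(age_to_ordinal(3))   # -> None, between (0-2) and (4-6)
```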
“…Every neuron has a weighted input, an activation function, and an output. The activation function determines the output depending on the input of the neuron [22]. It acts as a trigger that depends on the weighted input.…”
Section: Deep Reinforcement Learning (mentioning, confidence: 99%)
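The neuron described in that statement reduces to a few lines of code: a weighted input, an activation function acting as the trigger, and an output. The sketch below uses ReLU purely as an illustrative choice of activation; all names are hypothetical.

```python
# Sketch of a single neuron: weighted input -> activation -> output.
import numpy as np

def relu(z: float) -> float:
    """Activation function: passes positive input, gates out the rest."""
    return max(0.0, z)

def neuron_output(inputs: np.ndarray, weights: np.ndarray,
                  bias: float) -> float:
    z = float(np.dot(weights, inputs) + bias)  # weighted input
    return relu(z)                             # trigger on the sum

x = np.array([0.5, -1.2, 0.3])
w = np.array([0.8, 0.1, -0.5])
print(neuron_output(x, w, bias=0.2))  # 0.33
```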
“…Hence, a second prediction is that an accurate neural model should have similar cortical responses to a line drawing as it does to a corresponding realistic image. Moreover, probing how these networks operate, for example, as in Zeiler and Fergus (2014) and Bau et al. (2020), could provide a more precise understanding of how line drawings are interpreted.…”
Section: Possible Predictions and Experiments (mentioning, confidence: 99%)