2020
DOI: 10.48550/arxiv.2012.01940
Preprint

Locally Linear Attributes of ReLU Neural Networks

Ben Sattelberg, Renzo Cavalieri, Michael Kirby, et al.

Abstract: A ReLU neural network determines a continuous piecewise linear map from an input space to an output space. The weights of the network determine a decomposition of the input space into convex polytopes, and on each of these polytopes the network is described by a single affine mapping. The structure of the decomposition, together with the affine map attached to each polytope, can be analyzed to investigate the behavior of the associated neural network.
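The decomposition described in the abstract can be made concrete with a small numerical sketch. The snippet below is illustrative only (the one-hidden-layer network and its random weights are placeholders, not code from the paper): it reads off the activation pattern at a point and recovers the affine map A x + b that the network computes on the polytope containing that point.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)   # hidden layer (placeholder weights)
    W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)   # output layer (placeholder weights)

    def forward(x):
        return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

    def local_affine_map(x):
        """Return (A, b) with forward(y) == A @ y + b on the polytope containing x."""
        active = (W1 @ x + b1 > 0).astype(float)    # activation pattern at x
        D = np.diag(active)
        A = W2 @ D @ W1
        b = W2 @ D @ b1 + b2
        return A, b

    x = np.array([0.3, -1.2, 0.7])
    A, b = local_affine_map(x)
    assert np.allclose(forward(x), A @ x + b)       # the network is exactly affine on this polytope

Within the polytope containing x the forward pass and the recovered affine map agree exactly; crossing into a neighboring polytope flips at least one entry of the activation pattern and hence changes (A, b).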

Cited by 3 publications (4 citation statements)
References 16 publications
“…Our main idea is to harness the locally simple piecewise linear structure of NNs with piecewise linear activation functions, such as ReLU or Leaky ReLU NNs (cf. e.g., Hanin and Rolnick 2019; Sattelberg et al. 2020). Note that our method is still generally applicable, since any activation function can be approximated by piecewise linear functions (Hu et al. 2020; Liao et al. 2023).…”
Section: How Can PDs Representing Aleatoric Uncertainty in Input Data...
confidence: 99%
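As a hedged illustration of the remark that any activation can be approximated by piecewise linear functions, the following sketch (the breakpoints and range are arbitrary assumptions, not taken from the cited works) builds a piecewise linear surrogate for tanh by linear interpolation on fixed knots.

    import numpy as np

    knots = np.linspace(-4.0, 4.0, 17)              # breakpoints of the piecewise linear surrogate
    values = np.tanh(knots)

    def pwl_tanh(x):
        # np.interp is linear between knots and constant outside them,
        # so the surrogate is itself a piecewise linear function.
        return np.interp(x, knots, values)

    grid = np.linspace(-5.0, 5.0, 1001)
    print(np.max(np.abs(pwl_tanh(grid) - np.tanh(grid))))   # worst-case gap on the grid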
“…Without loss of generality, we exclusively focus on NNs of such structure for the remainder of this paper. Following Sattelberg et al. (2020), we recall definitions of the necessary mathematical concepts of polytopes and piecewise linear functions. The piecewise linear structure of NNs has been recognized in literature and is key in the discussion of expressiveness and complexity of NNs (e.g., Hanin and Rolnick 2019).…”
Section: Uncertainty Propagation via Piecewise Linear Transformation
confidence: 99%
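The polytopes recalled in this passage can be written down explicitly in the one-hidden-layer case: the linear region containing a point x is the intersection of half-spaces obtained by fixing the signs of the preactivations at x. A minimal sketch, again with random placeholder weights and not taken from the cited works:

    import numpy as np

    rng = np.random.default_rng(1)
    W1, b1 = rng.standard_normal((6, 2)), rng.standard_normal(6)   # placeholder hidden layer

    def region_halfspaces(x):
        """Return (C, d) so that the linear region containing x is {y : C @ y <= d}."""
        s = np.where(W1 @ x + b1 > 0, 1.0, -1.0)    # sign pattern of the preactivations at x
        C = -s[:, None] * W1                        # rewrite s_i * (W1 y + b1)_i >= 0 as C y <= d
        d = s * b1
        return C, d

    x = np.array([0.5, -0.25])
    C, d = region_halfspaces(x)
    assert np.all(C @ x <= d + 1e-12)               # x satisfies its own region's constraints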
“…cross-entropy loss, mean squared loss, etc. Moreover, we further assume that training samples in S belong to the same class, which is a reasonable assumption to make for ReLU networks [47].…”
Section: Validity of the Attack Model for ReLU Network
confidence: 99%
“…which means the student model trusts the teacher and can perfectly learn the knowledge defined by the distillation objective. The second is the local linearity assumption, which assumes that neural networks with piece-wise linear activation functions are locally linear (Sattelberg et al. 2020; Croce et al. 2019) and that the certified robust area falls into these piece-wise linear regions. These two assumptions collaboratively build an ideal situation of knowledge distillation in which we can derive a strong property of KDIGA: the certified robustness of the student model can be as good as or even better than that of the teacher model.…”
Section: Preservation of Certified Robustness
confidence: 99%
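The local linearity assumption quoted above can be probed numerically. The sketch below is a hypothetical setup with placeholder weights, not code from the cited paper: it samples points on an eps-sphere around an input and checks whether they share the input's activation pattern, i.e., whether the ball appears to lie in a single piecewise linear region. It is a sampled check, not a certificate.

    import numpy as np

    rng = np.random.default_rng(2)
    W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)   # placeholder hidden layer

    def pattern(x):
        return W1 @ x + b1 > 0                      # activation pattern of the hidden layer

    def ball_in_one_region(x, eps, n_samples=1000):
        base = pattern(x)
        for _ in range(n_samples):
            u = rng.standard_normal(x.shape)
            y = x + eps * u / np.linalg.norm(u)     # a point on the eps-sphere around x
            if not np.array_equal(pattern(y), base):
                return False                        # a region boundary passes through the ball
        return True                                 # no boundary found among the samples

    x = rng.standard_normal(4)
    print(ball_in_one_region(x, eps=1e-3))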