2019
DOI: 10.3934/fods.2020012
Spectral methods to study the robustness of residual neural networks with infinite layers

Abstract: Recently, neural networks (NN) with an infinite number of layers have been introduced. Especially for these very large NN, the training procedure is very expensive. Hence, there is interest in studying their robustness with respect to the input data, so as to avoid unnecessarily retraining the network. Typically, model-based statistical inference methods, e.g. Bayesian neural networks, are used to quantify uncertainties. Here, we consider a special class of residual neural networks and study the case when the number of layers…

Cited by 4 publications (6 citation statements) · References 63 publications (73 reference statements)
“…a matrix K ∈ ℝ^{m×m}, a factor ν > 0, and an m-dimensional standard Wiener process dW; the corresponding dynamics reads […]. Mathematically, the feature of presenting a possibly large number N of input and target data to the ANN can be exploited in terms of the mean-field limit [21, 30–33]. Instead of considering the input data x_i^(0) individually, we consider a statistical description of these data.…”
Section: Mathematical Formulation of Residual Neural Networks (ResNets)
confidence: 99%
“…In [15], the time-continuous version of a ResNet is studied and different temporal discretization schemes are discussed. There are also studies on the application of kinetic methods to ResNets [18–21]. For example, in [20], the authors consider the limit of infinitely many neurons and gradient steps in the case of one hidden layer.…”
Section: Introduction
confidence: 99%
“…Since the two PDEs are decoupled, they can be solved simultaneously. Application of the method of lines to (22) on discrete cells Ω_j, defining a discretization of the physical domain Ω, leads to the coupled system of ODEs…”
Section: Numerical Discretization Scheme
confidence: 99%
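Equation (22) of the citing paper is not reproduced in the excerpt, so the sketch below only illustrates the general method-of-lines pattern on an assumed stand-in PDE: a 1D linear advection equation, semi-discretized on cells Ω_j with an upwind flux, which leaves a coupled ODE system for a standard time integrator. The choice of PDE, flux, and all names are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch: semi-discretize u_t + a u_x = 0 on cells
# Omega_j, which leaves a coupled ODE system du_j/dt = rhs_j(u).
a, L, J = 1.0, 1.0, 100          # wave speed, domain length, number of cells
dx = L / J
x = (np.arange(J) + 0.5) * dx    # cell centers

def rhs(t, u):
    # First-order upwind differences (valid for a > 0), periodic boundary.
    return -a * (u - np.roll(u, 1)) / dx

u0 = np.exp(-200 * (x - 0.3) ** 2)            # smooth initial bump
sol = solve_ivp(rhs, (0.0, 0.5), u0, method="RK45", rtol=1e-6)
print(sol.y[:, -1].max())                      # transported profile at t = 0.5
```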
“…However, neither the training nor the well-posedness have been analyzed so far. Using a mean-field or kinetic description of large-scale neural networks has so far only been discussed for particular examples in a few recent manuscripts [17–22]. A general investigation, in particular in view of large input data, is to the best of our knowledge still open.…”
Section: Introduction
confidence: 99%
“…However, neither the training nor the well-posedness have been analyzed so far. Using a mean-field or kinetic description of large-scale neural networks has so far only been discussed for particular examples in a few recent manuscripts [2, 8, 29, 36, 37, 39]. A general investigation, in particular in view of large input data, is to the best of our knowledge still open.…”
Section: Introduction
confidence: 99%