2019
DOI: 10.1142/s0219530519410021
Error bounds for approximations with deep ReLU neural networks in $W^{s,p}$ norms

Abstract: We analyze approximation rates of deep ReLU neural networks for Sobolev-regular functions with respect to weaker Sobolev norms. First, building on a calculus of ReLU networks, we construct neural networks with ReLU activation functions that achieve certain approximation rates. Second, we establish lower bounds for the approximation by ReLU neural networks for classes of Sobolev-regular functions. Our results extend recent advances in the approximation theory of ReLU networks to the regime that is most…
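The abstract is truncated; for orientation, the typical shape of an upper bound in this line of work (stated in our notation as an assumption about the genre of result, not a quotation of the paper's theorem) is: for a target $f \in W^{n,p}((0,1)^d)$ and accuracy $0 < \varepsilon < 1$, there exists a ReLU network $\Phi_\varepsilon$ with
\[
\| f - R(\Phi_\varepsilon) \|_{W^{s,p}((0,1)^d)} \le \varepsilon,
\qquad
\#\mathrm{weights}(\Phi_\varepsilon) \lesssim \varepsilon^{-d/(n-s)} \log(1/\varepsilon),
\]
for $0 \le s \le 1$, where $R(\Phi)$ denotes the realization of the network $\Phi$.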

Cited by 149 publications (113 citation statements). References 47 publications (83 reference statements).
“…for almost every $x \in \mathbb{R}^d$ with $u_i(R\Phi(x)) = 0$. Since a finite union of nullsets is again a nullset, this proves the claim (18). The lemma follows by induction over the layers $K = 1, \dots$…”
Section: Setting (supporting)
confidence: 54%
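For context, the nullset step in this excerpt is finite subadditivity of Lebesgue measure $\lambda$ (our notation): if each $N_i$ is a nullset, then
\[
\lambda\Big( \bigcup_{i=1}^{n} N_i \Big) \;\le\; \sum_{i=1}^{n} \lambda(N_i) \;=\; 0.
\]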
“…In particular, it allows for a stability result, i.e. Lemma III.3, whose application is discussed in Section V. We would also like to mention a very recent work [18] on approximation in Sobolev norms, which deals with the issue by using a general bound for the Sobolev norm of the composition of functions from the Sobolev space $W^{1,\infty}$. Note, however, that this approach leads to a certain factor depending on the dimensions of the domains of the functions, which can be avoided with our method.…”
Section: Introduction (mentioning)
confidence: 99%
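For orientation, the composition bound referred to here is, in spirit, a chain-rule estimate; a hedged sketch in our notation (the precise constant and norm conventions in [18] may differ) is
\[
| g \circ f |_{W^{1,\infty}(\Omega)} \;\le\; C(d, m)\, | g |_{W^{1,\infty}(\mathbb{R}^m)}\, | f |_{W^{1,\infty}(\Omega;\,\mathbb{R}^m)},
\]
where $f : \Omega \subset \mathbb{R}^d \to \mathbb{R}^m$ and $C(d, m)$ is the dimension-dependent factor that the excerpt says can be avoided.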
“…For $m = 1$ and $k = 0$, Definition 6 corroborates that of [87]. Accordingly, we define the Lebesgue spaces of $\mathbb{R}^m$-valued functions.…”
Section: B. Functional Norms and Spaces (mentioning)
confidence: 68%
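For reference, the standard Lebesgue space of vector-valued functions that this excerpt alludes to reads as follows (our notation; the cited paper's Definition 6 may differ in details such as the choice of norm on $\mathbb{R}^m$):
\[
L^p(\Omega; \mathbb{R}^m) := \Big\{ f : \Omega \to \mathbb{R}^m \ \text{measurable} \; : \; \int_\Omega \| f(x) \|_2^p \, dx < \infty \Big\}, \qquad 1 \le p < \infty.
\]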
“…In the definition of neural networks, we distinguish between a neural network as a set of weights and an associated function that we call the realization of the neural network. This formal approach was introduced in [48], but we recall here a slightly different formulation from [28] for neural networks that allow so-called skip connections. Let $\varrho: \mathbb{R} \to \mathbb{R}$ be the ReLU, i.e., $\varrho(x) = \max\{0, x\}$, and let $\Phi$ be a NN as above.…”
Section: Neural Network (mentioning)
confidence: 99%
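To make the weights-versus-realization distinction concrete, here is a minimal Python sketch (our illustration, not the formalism of [48] or [28]): a "network" is stored purely as a list of (weight matrix, bias vector) pairs, and a separate function computes the function it realizes, with ReLU activations on hidden layers.

```python
import numpy as np

def relu(x):
    """ReLU activation: elementwise max{0, x}."""
    return np.maximum(0.0, x)

def make_network(layer_dims, rng=np.random.default_rng(0)):
    """A neural network in the formal sense: just the parameters,
    i.e. a list of (weight matrix, bias vector) pairs per layer."""
    return [
        (rng.standard_normal((m, n)), rng.standard_normal(m))
        for n, m in zip(layer_dims[:-1], layer_dims[1:])
    ]

def realization(network, x):
    """The function realized by the parameter list: ReLU on all
    hidden layers, affine (no activation) on the output layer."""
    for W, b in network[:-1]:
        x = relu(W @ x + b)
    W, b = network[-1]
    return W @ x + b

# Usage: input dimension 3, one hidden layer of width 5, output dimension 2.
phi = make_network([3, 5, 2])
print(realization(phi, np.ones(3)))
```

A skip-connection variant, as in the formulation the excerpt attributes to [28], would additionally feed the input (or outputs of earlier layers) into later layers; we omit that here for brevity.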