2019
DOI: 10.1002/mma.5575

A note on the expressive power of deep rectified linear unit networks in high‐dimensional spaces

Abstract: We investigate the ability of deep rectified linear unit (ReLU) networks to approximate multivariate functions. Specifically, we establish an approximation error estimate for a class of bandlimited functions; in this case, ReLU networks can overcome the “curse of dimensionality.”
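To make the object of study concrete, here is a minimal sketch (not from the paper; the layer widths are hypothetical) of a deep ReLU network as an alternating composition of affine maps and the componentwise ReLU activation:

```python
import numpy as np

def relu(x):
    # Componentwise rectified linear unit: max(x, 0).
    return np.maximum(x, 0.0)

def relu_network(x, weights, biases):
    # Deep ReLU network: alternate affine maps and ReLU, with a final affine output layer.
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    return weights[-1] @ h + biases[-1]

# Hypothetical sizes: input dimension 10, two hidden layers of width 32, scalar output.
rng = np.random.default_rng(0)
dims = [10, 32, 32, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(dims[:-1], dims[1:])]
biases = [rng.standard_normal(m) for m in dims[1:]]
print(relu_network(rng.standard_normal(dims[0]), weights, biases))
```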

Cited by 22 publications (24 citation statements)
References 8 publications (15 reference statements)

“…This gives some intuition that in many cases the above upper bound is not a tight one. We note that more recent works such as [4], [24], and the references contained therein show some of the new developments on these size bounds. Moreover, we can easily obtain a smaller network size estimate when the Lyapunov function has a compositional structure or a lower-dimensional structure.…”
Section: Proposition
confidence: 93%
“…Most existing approximation theories for deep neural networks so far focus on the approximation rate in the number of parameters W (Cybenko, 1989; Hornik, Stinchcombe, & White, 1989; Barron, 1993; Liang & Srikant, 2016; Yarotsky, 2017, 2018; Poggio, Mhaskar, Rosasco, Miranda, & Liao, 2017; Weinan & Wang, 2018; Petersen & Voigtlaender, 2018; Chui, Lin, & Zhou, 2018; Nakada & Imaizumi, 2019; Gribonval, Kutyniok, Nielsen, & Voigtlaender, 2019; Gühring, Kutyniok, & Petersen, 2019; Chen, Jiang, Liao, & Zhao, 2019; Li, Lin, & Shen, 2019; Suzuki, 2019; Bao et al., 2019; Opschoor, Schwab, & Zech, 2019; Yarotsky & Zhevnerchuk, 2019; Bölcskei, Grohs, Kutyniok, & Petersen, 2019; Montanelli & Du, 2019; Chen & Wu, 2019; Zhou, 2020; Montanelli & Yang, 2020; Montanelli, Yang, & Du, in press). From the point of view of theoretical difficulty, controlling two variables, N and L, in our theory is more challenging than controlling one variable W in the literature.…”
Section: Approximation Rates in O(N) and O(L) Versus O(W)
confidence: 99%
“…For example, the exponential convergence was studied for polynomials (Yarotsky, 2017; Montanelli et al., in press; Lu et al., 2020), smooth functions (Montanelli et al., in press; Liang & Srikant, 2016), analytic functions (Weinan & Wang, 2018), and functions admitting a holomorphic extension to a Bernstein polyellipse (Opschoor et al., 2019). For another example, no curse of dimensionality occurs, or the curse is lessened, for Barron spaces (Barron, 1993; Weinan et al., 2019; Weinan & Wojtowytsch, 2020), Korobov spaces (Montanelli & Du, 2019), band-limited functions (Chen & Wu, 2019; Montanelli et al., in press), compositional functions (Poggio et al., 2017), and smooth functions (Yarotsky & Zhevnerchuk, 2019; Lu et al., 2020; Montanelli & Yang, 2020; Yang & Wang, 2020).…”
Section: Further Interpretation of Our Theory
confidence: 99%
See 1 more Smart Citation
“…Moreover, for any x ∈ [−M − 1, M + 1], g̃_δ(x) ⇉ x/2 + ((M + 1)/2) · |x/(M + 1)| = ReLU(x) as δ → 0⁺. Define g_δ(x) := (g̃_δ(x) − g̃_δ(x − η_0))/η_0 for any x ∈ ℝ. Clearly, g̃_δ ∈ H_σ(10, 4) implies g_δ ∈ H_σ(20, 4). For any x ∈ [−M, M], we have x, x − η_0 ∈ [−M − 1, M + 1], implying g_δ(x) = (g̃_δ(x) − g̃_δ(x − η_0))/η_0 ⇉ (ReLU(x) − ReLU(x − η_0))/η_0 = g(x) as δ → 0⁺.…”
confidence: 99%
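The construction in this excerpt can be checked numerically. The sketch below assumes a particular smoothing g̃_δ(x) = x/2 + √(x² + δ)/2, which converges uniformly to ReLU as δ → 0⁺; the excerpt does not specify the cited paper's actual g̃_δ, η_0, or the class H_σ, so these choices are illustrative only. It verifies that the difference quotient g_δ(x) = (g̃_δ(x) − g̃_δ(x − η_0))/η_0 approaches g(x) = (ReLU(x) − ReLU(x − η_0))/η_0:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def g_tilde(x, delta):
    # Hypothetical smoothing of ReLU: x/2 + sqrt(x^2 + delta)/2 -> ReLU(x) uniformly as delta -> 0+.
    return 0.5 * x + 0.5 * np.sqrt(x**2 + delta)

M, eta0 = 5.0, 0.1                              # illustrative values, not from the paper
x = np.linspace(-M, M, 2001)
g = (relu(x) - relu(x - eta0)) / eta0           # target difference quotient g(x)

for delta in [1e-1, 1e-2, 1e-4, 1e-6]:
    g_delta = (g_tilde(x, delta) - g_tilde(x - eta0, delta)) / eta0
    print(delta, np.max(np.abs(g_delta - g)))   # sup-norm error shrinks as delta -> 0+
```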