2018
DOI: 10.1142/s0219530518500203

Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ

Abstract: We estimate the expressive power of certain deep neural networks (DNNs for short) on a class of countably-parametric, holomorphic maps u : U → ℝ on the parameter domain U = [−1, 1]^ℕ. Dimension-independent rates of best n-term truncations of generalized polynomial chaos (gpc for short) approximations depend only on the summability exponent of the sequence of their gpc expansion coefficients. So-called (b, ε)-holomorphic maps u, with b ∈ …
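The quantitative point of the abstract is that the best n-term truncation rate is governed solely by the ℓ^p-summability of the gpc coefficient sequence: by the classical Stechkin bound, if the decreasingly sorted coefficients lie in ℓ^p for some 0 < p < 1, the ℓ^1 tail after keeping the n largest terms decays like n^(-(1/p - 1)). A purely illustrative Python sketch (not code from the paper; the synthetic coefficient sequence is my own choice) checks this rate numerically:

import numpy as np

# Synthetic gpc coefficient magnitudes c_k = k**(-1/p), the borderline case of
# l^p-summability; their l^1 tail decays at the Stechkin rate n**-(1/p - 1).
p = 0.5
k = np.arange(1, 1_000_001)
c = k ** (-1.0 / p)

for n in (10, 100, 1000, 10_000):
    tail = c[n:].sum()                     # best n-term truncation error, l^1 sense
    predicted = n ** (-(1.0 / p - 1.0))    # Stechkin-type rate
    print(f"n = {n:6d}   tail = {tail:.3e}   n^-(1/p-1) = {predicted:.3e}")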

Cited by 208 publications (183 citation statements: 2 supporting, 181 mentioning, 0 contrasting)
References 27 publications
“…and any (fixed) f* ∈ G, there exists an f ∈ H such that f*(z) = (π_M f(x) − y)² − (f_ρ(x) − y)². Therefore, it follows from (8) that…”
Section: Proof of Theorem (mentioning)
confidence: 99%
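This fragment is a standard error-decomposition step from least-squares learning theory. Assuming the usual conventions of that literature (an assumption on my part, since the surrounding definitions and inequality (8) are not quoted here), z = (x, y) is a sample from the distribution ρ, f_ρ is the regression function, and π_M denotes truncation onto [−M, M]; taking expectations of f* then produces the excess generalization error of the truncated estimator, which the cited proof goes on to bound:

% assuming standard notation: z = (x, y) ~ \rho, f_\rho(x) = E[y | x], \pi_M = truncation onto [-M, M]
\mathbb{E}_{z \sim \rho}\!\left[ f^*(z) \right]
  = \int \big( \pi_M f(x) - y \big)^2 \, d\rho(z) - \int \big( f_\rho(x) - y \big)^2 \, d\rho(z)
  = \mathcal{E}(\pi_M f) - \mathcal{E}(f_\rho).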
“…Depth and structure of deep nets are two crucial factors in promoting the development of deep learning [5]. The necessity of depth has been rigorously verified from the viewpoints of approximation theory and representation theory, by showing the advantages of deep nets in localized approximation [6], sparse approximation in the frequency domain [7,8], sparse approximation in the spatial domain [9], manifold learning [10,11], grasping hierarchical structures [12,13], realizing piecewise smoothness [14], universality with a bounded number of parameters [15,16] and preserving rotation invariance [17]. We refer the readers to Pinkus [18] and Poggio et al [19] for details on the theoretical advantages of deep nets over shallow neural networks (shallow nets).…”
Section: Introduction (mentioning)
confidence: 99%
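One of the advantages listed in this excerpt, localized approximation, can be illustrated with a minimal two-layer ReLU construction (an illustration of the general idea only, not the constructions of [6] or the other cited works): composing a ReLU with a sum of one-dimensional hat functions yields a compactly supported bump, something a single hidden ReLU layer of ridge functions cannot produce exactly in two dimensions.

import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def hat(t, center=0.0, width=1.0):
    # first ReLU layer: a piecewise-linear bump supported on [center - width, center + width]
    s = (t - center) / width
    return relu(s + 1.0) - 2.0 * relu(s) + relu(s - 1.0)

def bump2d(x, y):
    # second ReLU layer: a compactly supported 2-D bump, zero wherever hat(x) + hat(y) <= 1
    return relu(hat(x) + hat(y) - 1.0)

xs = np.linspace(-2.0, 2.0, 5)
print(np.round(bump2d(xs[:, None], xs[None, :]), 3))   # nonzero only near the origin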
“…high-dimensional probability and statistics [90], [91] and empirical process theory [92] (core tools of ToDL), high-dimensional bounds can provide useful insights on approximation with deep ReLU FNNs fed with input data manifesting high dimension. Toward this timely need, [93] and [94] present FNN-based approximations applicable to parametric partial differential equations. Therefore, following the lead of [93], [94] and [85]-[89], we set out to derive error bounds for a matrix-vector product approximation with deep ReLU FNNs.…”
Section: A. Related Work and Motivation (mentioning)
confidence: 99%
“…Toward this timely need, [93] and [94] present FNN-based approximations applicable to parametric partial differential equations. Therefore, following the lead of [93], [94] and [85]-[89], we set out to derive error bounds for a matrix-vector product approximation with deep ReLU FNNs. This is also motivated by the fact that a matrix-vector product models various research problems of wireless communications and signal processing; network science and graph signal processing; and network neuroscience and brain physics [95], [96].…”
Section: A. Related Work and Motivation (mentioning)
confidence: 99%
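The quoted motivation, approximating a matrix-vector product with deep ReLU networks, can be illustrated with the well-known ReLU emulation of multiplication (a sketch of the general mechanism only; the constructions and constants in [93], [94] differ, and all function names below are mine): a sawtooth composition approximates the square on [0, 1], and the polarization identity xy = ((x+y)/2)² − ((x−y)/2)² turns squares into products.

import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def tooth(x):
    # the piecewise-linear sawtooth map on [0, 1], written with three ReLUs
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def sq_approx(x, m):
    # Yarotsky-style approximation of x**2 on [0, 1]:
    # x**2 ~ x - sum_{s=1..m} tooth^(s)(x) / 4**s, with error O(4**-m)
    out, g = x, x
    for s in range(1, m + 1):
        g = tooth(g)
        out = out - g / 4.0 ** s
    return out

def prod_approx(x, y, m):
    # x*y = ((x+y)/2)**2 - ((x-y)/2)**2 for x, y in [-1, 1];
    # |t| = relu(t) + relu(-t) keeps the inputs of sq_approx inside [0, 1]
    a = 0.5 * (relu(x + y) + relu(-(x + y)))
    b = 0.5 * (relu(x - y) + relu(-(x - y)))
    return sq_approx(a, m) - sq_approx(b, m)

def matvec_approx(A, v, m):
    # (Av)_i = sum_j A_ij * v_j, with every scalar product replaced by its ReLU emulation
    return prod_approx(A, v[None, :], m).sum(axis=1)

rng = np.random.default_rng(0)
A, v = rng.uniform(-1, 1, (4, 5)), rng.uniform(-1, 1, 5)
for m in (2, 4, 6, 8):
    err = np.max(np.abs(matvec_approx(A, v, m) - A @ v))
    print(f"m = {m}: max entrywise error {err:.2e}")

Deepening the network (larger m) tightens the error by roughly a factor of 4 per additional sawtooth level, which is the kind of depth-accuracy trade-off the cited error bounds quantify.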
“…Statistical and stochastic modeling principles were applied in deep learning algorithms to strengthen object search capabilities or for improved model fitting under uncertainty (57,59). Boltzmann machines assist in the deep understanding of the data by linking layer-level structured data and then by estimating model parameters through maximum likelihood methods (60,61).…”
Section: Appendix III: Machine Learning Versus Deep Learning in Compu… (mentioning)
confidence: 99%
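The maximum-likelihood parameter estimation mentioned for Boltzmann machines is commonly approximated in practice by contrastive divergence. A minimal sketch for a restricted Boltzmann machine follows (illustrative only; the toy data, layer sizes, and learning rate are made up, and CD-1 is a stochastic approximation rather than exact maximum likelihood):

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Restricted Boltzmann machine with binary visible and hidden units.
n_visible, n_hidden = 6, 3
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

def cd1_update(v0, lr=0.1):
    # One contrastive-divergence (CD-1) step: data statistics minus one-step
    # model statistics, approximating the maximum-likelihood gradient.
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)                       # positive phase
    h0 = (rng.random(n_hidden) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_v)                     # one Gibbs step (negative phase)
    v1 = (rng.random(n_visible) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    b_v += lr * (v0 - v1)
    b_h += lr * (ph0 - ph1)

data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)
for _ in range(200):
    for v in data:
        cd1_update(v)

print(np.round(sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v), 2))   # reconstructions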