2019
DOI: 10.48550/arxiv.1912.04310
Preprint

Efficient approximation of high-dimensional functions with neural networks

Patrick Cheridito,
Arnulf Jentzen,
Florian Rossmannek

Abstract: In this paper, we develop an approximation theory for deep neural networks that is based on the concept of a catalog network. Catalog networks are generalizations of standard neural networks in which the nonlinear activation functions can vary from layer to layer as long as they are chosen from a predefined catalog of continuous functions. As such, catalog networks constitute a rich family of continuous functions. We show that under appropriate conditions on the catalog, catalog networks can efficiently be app…
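To make the abstract's notion of a catalog network concrete, here is a minimal sketch, assuming fully connected layers and an illustrative three-element catalog; the names `CATALOG` and `catalog_network` are hypothetical, not from the paper:

```python
import numpy as np

# Illustrative catalog of continuous activation functions.  In the paper,
# each layer's nonlinearity must be drawn from a predefined catalog like this.
CATALOG = {
    "relu": lambda z: np.maximum(z, 0.0),
    "tanh": np.tanh,
    "softplus": lambda z: np.log1p(np.exp(z)),
}

def catalog_network(x, layers):
    """Evaluate a catalog network on input x.

    `layers` is a list of (W, b, name) triples: an affine map followed by an
    activation looked up in CATALOG (None = purely affine output layer).  A
    standard feed-forward network is the special case where every hidden
    layer uses the same activation.
    """
    for W, b, name in layers:
        x = W @ x + b
        if name is not None:
            x = CATALOG[name](x)
    return x

# Toy usage: a 3 -> 5 -> 5 -> 2 network mixing tanh and softplus layers.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((5, 3)), np.zeros(5), "tanh"),
    (rng.standard_normal((5, 5)), np.zeros(5), "softplus"),
    (rng.standard_normal((2, 5)), np.zeros(2), None),
]
print(catalog_network(rng.standard_normal(3), layers))
```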


Cited by 5 publications (5 citation statements) · References 17 publications

“…In particular, KL-HMC takes about 4 mins compared to about 20 mins for B-PINN-HMC. However, we remark that the truncated KL expansion would suffer from the "curse of dimensionality" when approximating high dimensional functions, while deep neural networks are known to be efficient for high-dimensional function approximation [27].…”
Section: Results and Comparisons (mentioning)
confidence: 99%
“…Next note that Cheridito et al. [7, Proposition II.5] assures that there exists $\Phi \in \mathbf{N}$ which satisfies for all $x, y \in \mathbb{R}^{\mathcal{I}(\Phi_1)}$ that $\mathcal{R}_a(\Phi) \in C(\mathbb{R}^{2\mathcal{I}(\Phi_1)}, \mathbb{R}^{2\mathcal{O}(\Phi_1)})$, $\mathcal{R}_a(\Phi)(x, y) = (\mathcal{R}_a(\Phi_1)(x), \mathcal{R}_a(\Phi_2)(y))$, and…”
Section: Sums of ANNs (mentioning)
confidence: 99%
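The identity quoted above is the standard parallelization of two networks of equal depth: stacking their weight matrices block-diagonally yields one network whose realization computes both in parallel. A minimal numpy sketch of that construction, assuming fully connected ReLU networks of equal depth; the helper names `parallelize` and `realize` are illustrative, not from [7]:

```python
import numpy as np

def parallelize(layers1, layers2):
    """Stack two equal-depth networks into one whose realization maps
    (x, y) to (net1(x), net2(y)).

    Each network is a list of (W, b) pairs; the combined weight matrices
    are block-diagonal, so the two halves never interact.
    """
    combined = []
    for (W1, b1), (W2, b2) in zip(layers1, layers2):
        W = np.block([
            [W1, np.zeros((W1.shape[0], W2.shape[1]))],
            [np.zeros((W2.shape[0], W1.shape[1])), W2],
        ])
        combined.append((W, np.concatenate([b1, b2])))
    return combined

def realize(layers, x, act=lambda z: np.maximum(z, 0.0)):
    """ReLU realization: activation on every layer except the last."""
    for W, b in layers[:-1]:
        x = act(W @ x + b)
    W, b = layers[-1]
    return W @ x + b

# Check the quoted property: R(P(net1, net2))(x, y) == (R(net1)(x), R(net2)(y)).
rng = np.random.default_rng(1)
net1 = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
        (rng.standard_normal((2, 4)), rng.standard_normal(2))]
net2 = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
        (rng.standard_normal((2, 4)), rng.standard_normal(2))]
x, y = rng.standard_normal(3), rng.standard_normal(3)
combined = parallelize(net1, net2)
assert np.allclose(realize(combined, np.concatenate([x, y])),
                   np.concatenate([realize(net1, x), realize(net2, y)]))
```

Because the elementwise activation commutes with concatenation, the block-diagonal layers keep the two computations independent, which is exactly what the identity $\mathcal{R}_a(\Phi)(x, y) = (\mathcal{R}_a(\Phi_1)(x), \mathcal{R}_a(\Phi_2)(y))$ expresses.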
“…The two fractional errors shown are for three and four interpolation points in each of the dimensions, respectively. In contrast, NNs are known to mitigate the "curse of dimensionality" [44].…”
Section: Order-of-magnitude Reduction In Training Data (mentioning)
confidence: 99%