2021
DOI: 10.48550/arxiv.2104.08938
Preprint

On the approximation of functions by tanh neural networks

Tim De Ryck, Samuel Lanthaler, Siddhartha Mishra

Abstract: We derive bounds on the error, in high-order Sobolev norms, incurred in the approximation of Sobolev-regular as well as analytic functions by neural networks with the hyperbolic tangent activation function. These bounds provide explicit estimates on the approximation error with respect to the size of the neural networks. We show that tanh neural networks with only two hidden layers suffice to approximate functions at comparable or better rates than much deeper ReLU neural networks.
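The two-hidden-layer tanh architecture the abstract refers to is straightforward to instantiate. Below is a minimal sketch, assuming PyTorch: the width 32, the target sin(pi x), and the Adam training loop are illustrative choices only, not the paper's explicit construction (which is constructive and gives approximation bounds without training).

```python
# Illustrative sketch: a tanh network with two hidden layers, the
# architecture class the paper's bounds apply to. Width, target, and
# optimizer settings are arbitrary choices for demonstration.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two hidden layers with tanh activations, as in the paper's setting.
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

# Smooth (analytic) target function on [-1, 1].
x = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
y = torch.sin(math.pi * x)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(f"final mean-squared training error: {loss.item():.2e}")
```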

Cited by 1 publication (1 citation statement)
References 58 publications

“…It is still an open question whether the approximation error of large-scale (deep and wide) generators is small. Recently, De Ryck et al. [17] proved that shallow (only one hidden layer) but wide tanh neural networks can approximate Sobolev-regular and analytic functions. Their results reveal that wider neural networks have larger capacity.…”
Section: Convergence Analysis of Empirical Loss
confidence: 99%