1996
DOI: 10.1162/neco.1996.8.1.164

Neural Networks for Optimal Approximation of Smooth and Analytic Functions

Abstract: We prove that neural networks with a single hidden layer are capable of providing an optimal order of approximation for functions assumed to possess a given number of derivatives, if the activation function evaluated by each principal element satisfies certain technical conditions. Under these conditions, it is also possible to construct networks that provide a geometric order of approximation for analytic target functions. The permissible activation functions include the squashing function (1 + e^(−x))^(−1) as well…
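As a rough illustration of the setting in the abstract (not the paper's explicit construction), the sketch below fits a one-hidden-layer network of squashing units to a smooth target. Random inner weights with least-squares output weights are an assumption made here for brevity; all names and parameter choices are hypothetical.

```python
# Minimal sketch, NOT the construction from Mhaskar (1996): one hidden
# layer of logistic "squashing" units (1 + e^{-t})^{-1} with random
# inner weights; only the output weights are fit, by least squares.
import numpy as np

rng = np.random.default_rng(0)

def squash(t):
    """Squashing (logistic) activation: (1 + e^{-t})^{-1}."""
    return 1.0 / (1.0 + np.exp(-t))

def fit_shallow_net(x, y, n_hidden=40):
    """Fit the output layer of a one-hidden-layer network by least squares."""
    w = rng.normal(scale=4.0, size=n_hidden)    # random inner weights (assumed)
    b = rng.uniform(-4.0, 4.0, size=n_hidden)   # random biases (assumed)
    H = squash(np.outer(x, w) + b)              # hidden-layer outputs
    c, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights
    return lambda t: squash(np.outer(t, w) + b) @ c

x = np.linspace(-1.0, 1.0, 400)
f = np.exp(x) * np.sin(3 * x)                   # a smooth (analytic) target
net = fit_shallow_net(x, f)
print(f"max error: {np.max(np.abs(net(x) - f)):.2e}")
```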

Cited by 315 publications (211 citation statements)
References: 10 publications
“…As early as the 1990s, researchers studied the approximation ability of ANNs, especially FNNs and RBFNs [16], [17], [23]–[28]. They obtained a general approximation theorem: FNNs and RBFNs whose scales are unlimited…”
Section: B. Analysis of HSSVMs for Regression Estimation
confidence: 99%
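The excerpt above mentions Gaussian radial basis function networks alongside FNNs. A minimal sketch of such an RBFN follows, with fixed centers and least-squares output weights; the centers, the width s, and the target are illustrative assumptions, not taken from the cited papers.

```python
# Sketch of a Gaussian RBF network: phi_j(x) = exp(-(x - c_j)^2 / (2 s^2))
# on fixed centers c_j; output weights fit by least squares.
import numpy as np

def fit_gaussian_rbf(x, y, centers, s=0.2):
    """Fit RBFN output weights by least squares (illustrative setup)."""
    Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * s ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda t: np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * s ** 2)) @ w

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)              # smooth target (assumed)
centers = np.linspace(0.0, 1.0, 15)    # fixed, evenly spaced centers (assumed)
rbf = fit_gaussian_rbf(x, y, centers)
print(f"max error: {np.max(np.abs(rbf(x) - y)):.2e}")
```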
“…This type of result is well known for spline functions, and has recently been demonstrated for neural networks (Mhaskar, 1996) and mixture-of-experts architectures (Zeevi et al., 1998). Using the results of Theorem 6.1, and assuming that the optimal memory size d is known, as in the nonparametric setting above, we can compute the value of the complexity index n that yields the fastest rates of convergence.…”
Section: Structural Risk Minimization for Time Series
confidence: 70%
“…During the last decade, a great deal of research in the field of approximation theory has been devoted to approximating real-valued functions using artificial neural networks (ANNs) with one or more hidden layers, each neuron evaluating a sigmoidal or radial basis function (see [1]–[9]). A typical result in this context is a density result showing that an ANN can approximate a given function in a given class to any degree of accuracy, provided that enough neurons are used.…”
Section: Introduction
confidence: 99%
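To make the density statement in this excerpt concrete, the sketch below grows the hidden layer of a logistic network and reports the fitting error, which should shrink as neurons are added. Random inner weights with least-squares output weights are an illustrative choice here, not the constructions used in the papers the excerpt cites as [1]–[9].

```python
# Density illustration (assumed setup): more logistic hidden units
# should drive the approximation error on the target down.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 500)
f = np.abs(x) ** 3                                # target with limited smoothness

for n in (5, 20, 80, 320):
    w = rng.normal(scale=5.0, size=n)             # random inner weights
    b = rng.uniform(-5.0, 5.0, size=n)            # random biases
    H = 1.0 / (1.0 + np.exp(-(np.outer(x, w) + b)))
    c, *_ = np.linalg.lstsq(H, f, rcond=None)     # output weights
    print(f"n={n:4d}  max error: {np.max(np.abs(H @ c - f)):.2e}")
```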