1994
DOI: 10.1162/neco.1994.6.6.1262
Degree of Approximation Results for Feedforward Networks Approximating Unknown Mappings and Their Derivatives

Abstract: Recently Barron (1993) has given rates for hidden layer feedforward networks with sigmoid activation functions approximating a class of functions satisfying a certain smoothness condition. These rates do not depend on the dimension of the input space. We extend Barron's results to feedforward networks with possibly nonsigmoid activation functions approximating mappings and their derivatives simultaneously. Our conditions are similar but not identical to Barron's, but we obtain the same rates of approximation, …
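For orientation, a brief sketch (not part of the abstract) of the kind of rate in question: under Barron's smoothness condition, i.e. a finite first moment of the Fourier magnitude of the target f, a single-hidden-layer network f_n with n sigmoidal units satisfies, up to constants depending on the domain,

\[ \| f - f_n \|_{L^2(\mu)} \;\le\; \frac{2\,C_f}{\sqrt{n}}, \qquad C_f = \int_{\mathbb{R}^d} \|\omega\|\,|\hat f(\omega)|\,d\omega, \]

so the rate n^{-1/2} does not deteriorate with the input dimension d (only the constant C_f can). The paper's contribution is to obtain the same O(n^{-1/2}) rate in Sobolev-type norms, i.e. for the mapping and its derivatives simultaneously, and for possibly nonsigmoid activation functions.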

Cited by 143 publications (105 citation statements) | References 7 publications
“…The latter, therefore, shares all the advantages of the former, including the very good convergence rate [11] and its application in Sobolev space [17].…”
Section: Approximation With Logistic Differential Equations (mentioning)
confidence: 89%
“…Also, the hidden layer has 6 nodes; in each node, weighted inputs are processed by a transfer function, as shown in Fig. 4 (Hornik et al. 1994).…”
Section: Model Description (mentioning)
confidence: 99%
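As a generic illustration of the quoted description above (the notation here is ours, not the citing paper's): each hidden node forms a weighted sum of the network inputs and passes it through the transfer function,

\[ h_j \;=\; \sigma\Big( \sum_{i=1}^{d} w_{ji}\, x_i + b_j \Big), \qquad j = 1,\dots,6, \]

with \sigma the transfer function and w_{ji}, b_j the adjustable weights and biases; the network output is then a weighted combination of the h_j.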
“…they require a smaller number of adjustable parameters than approximators which are linear with respect to the parameters (such as polynomials, for instance, or radial basis functions with fixed centers and variances) (Hornik et al. 1994). Specifically, the number of parameters required increases linearly with the number of input variables, whereas it grows exponentially for approximators which are linear with respect to the parameters.…”
Section: Some Feedforward Neural Networks Are Parsimonious Universal … (mentioning)
confidence: 99%
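A rough parameter count makes the parsimony argument in the excerpt above concrete (illustrative figures only, not taken from the cited works): a single-hidden-layer network with d inputs and n hidden units has about n(d+2)+1 adjustable parameters, namely n(d+1) input-side weights and biases plus n+1 output-side weights, i.e. linear growth in d for fixed n. A linear-in-parameters approximator such as a polynomial of total degree p in d variables has \(\binom{d+p}{p}\) coefficients; already for d = 10 and p = 5 this is \(\binom{15}{5} = 3003\), and since the degree needed to maintain a fixed accuracy typically grows with d, the parameter count blows up in the way the quoted passage describes as exponential.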