1992
DOI: 10.1016/s0893-6080(05)80009-7

Approximation of continuous functions on R^d by linear combinations of shifted rotations of a sigmoid function with and without scaling

Cited by 119 publications (44 citation statements)
References 5 publications
“…When the activation function is non-linear, for instance a binary step function or a sigmoid function, an SLFN with at most k hidden neurons can learn k distinct observations with zero error [30]. In such cases, both the weights w and β need to be adjusted carefully, using various methods, to ensure the network's approximation capability [32].…”
Section: Feedforward Neural Network
confidence: 99%
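As an illustrative aside (not taken from the cited papers), a minimal NumPy sketch of the interpolation property quoted above, assuming the logistic sigmoid and the usual SLFN form f(x) = Σᵢ βᵢ σ(wᵢ·x + bᵢ): with generic hidden weights the k × k hidden-layer output matrix is invertible, so the output weights β solving the k observations exactly can be obtained from a linear system. All variable names and sizes here are arbitrary illustration choices.

```python
import numpy as np

# Sketch: an SLFN with k hidden neurons fitting k distinct observations
# with zero training error (assumption: logistic sigmoid, random weights).
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
k, d = 8, 2                      # k observations in R^d
X = rng.normal(size=(k, d))      # distinct inputs
y = rng.normal(size=k)           # target values

W = rng.normal(size=(d, k))      # hidden-layer weights (one column per neuron)
b = rng.normal(size=k)           # hidden-layer thresholds

H = sigmoid(X @ W + b)           # k x k matrix, H[i, j] = sigma(w_j . x_i + b_j)
beta = np.linalg.solve(H, y)     # output weights giving zero error on the data

print(np.max(np.abs(H @ beta - y)))   # ~0 up to floating-point error
```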
“…In [57], the C-density property was proved for any continuous, bounded, and nonconstant computational unit, and the L^p-density property was proved for any bounded and nonconstant computational unit. The C-density property for functions on the whole of R^d was investigated in [50] and [51].…”
Section: Being Sigmoidal Is Not Substantial
confidence: 99%
“…In the papers [51] and [61], various density properties were proved for monotone sigmoidal functions, using only weights with a norm equal to 1. The case of continuous sigmoidal computational units and of weights and thresholds taking only integer values was addressed in [73].…”
Section: Restricting the Parameter Set
confidence: 99%
“…In many applications, it is convenient to take the activation function σ to be a sigmoidal function, i.e., one satisfying lim_{t→−∞} σ(t) = 0 and lim_{t→+∞} σ(t) = 1. The literature on neural networks abounds with the use of such functions and their superpositions (see, e.g., [2,4,6,8,10,11,13,15,20,22,29]). The possibility of approximating a continuous function on a compact subset of the real line or of d-dimensional space by SLFNs with a sigmoidal activation function has been well studied in a number of papers.…”
Section: Introduction
confidence: 99%
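To make the quoted definition concrete, here is a minimal sketch (an assumption-laden illustration, not a construction from the cited works): the outer coefficients of a superposition of shifted and scaled logistic sigmoids are fit by least squares to a continuous target on the compact set [0, 1]. The grid size, the fixed scaling, and the target sin(2πx) are arbitrary choices for illustration only.

```python
import numpy as np

# Sketch: approximate a continuous f on [0, 1] by a superposition
#   sum_i c_i * sigmoid(w_i * x - theta_i)
# (assumption: logistic sigmoid; only the coefficients c_i are fitted).
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

f = lambda x: np.sin(2 * np.pi * x)        # target continuous function
x = np.linspace(0.0, 1.0, 200)

n = 20                                      # number of sigmoidal units
w = np.full(n, 40.0)                        # fixed scalings (steep sigmoids)
theta = 40.0 * np.linspace(0.0, 1.0, n)     # shifts spread across [0, 1]

Phi = sigmoid(np.outer(x, w) - theta)       # design matrix of shifted sigmoids
c, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)

print(np.max(np.abs(Phi @ c - f(x))))       # sup-norm error on the grid
```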
“…For example, Stinchcombe and White [34] showed that SLFNs with a polygonal, polynomial spline or analytic activation function and a bounded set of weights have the universal approximation property. Ito [20] investigated this property of networks using monotone sigmoidal functions (tending to 0 at minus infinity and 1 at infinity), with only weights located on the unit sphere. In [16,17,19], one of the coauthors considered SLFNs with weights varying on a restricted set of directions and gave several necessary and sufficient conditions for good approximation by such networks.…”
Section: Introduction
confidence: 99%