Multivariate Jackson-type inequality for a new type neural network approximation (2014)
DOI: 10.1016/j.apm.2014.05.018

Cited by 16 publications (16 citation statements)
References 15 publications
“…where r, c_0 > 0 and ‖x‖ denotes the Euclidean norm of x. Denote by Lip^(r,c_0) the family of (r, c_0)-Lipschitz functions satisfying (6). The Lipschitz property describes the smoothness of f and has been adopted in a vast literature [7], [28], [38], [22], [9] to quantify the approximation ability of neural networks. Denote by Lip^(N,s,r,c_0) the set of all f ∈ Lip^(r,c_0) that are s-sparse in N^d partitions.…”
Section: B. Sparse Approximation For Deep Nets
confidence: 99%
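Condition (6) is not reproduced in the excerpt; in this literature an (r, c_0)-Lipschitz condition (my reconstruction, not quoted from the citing paper) typically reads

$$|f(x) - f(x')| \le c_0\, \|x - x'\|^{r} \quad \text{for all } x, x' \text{ in the domain},\ 0 < r \le 1,$$

and "s-sparse in N^d partitions" then presumably restricts the support of f to at most s of the N^d cells of a regular partition of the cube.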
“…Resonance condition (24) is fulfilled due to the definition of h_n: for each g ∈ V_{4n} there exists at least one point z_0 ∈ X_{4n} such that…”
Section: Sharpness Due To Counterexamples
confidence: 99%
“…They do not consist of linear combinations of ridge functions. A special network with four layers is introduced in [24] to obtain a Jackson estimate in terms of a first-order modulus of smoothness.…”
Section: Introduction
confidence: 99%
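For context, the objects named in this excerpt take standard forms (notation mine, not the citing paper's): a ridge-function network and the first-order modulus of smoothness are

$$N_n(x) = \sum_{k=1}^{n} c_k\, \sigma(\langle a_k, x \rangle + b_k), \qquad \omega_1(f, t) = \sup_{\|h\| \le t}\, \sup_{x} |f(x+h) - f(x)|,$$

and a Jackson-type estimate bounds the approximation error of an n-parameter network by C\,\omega_1(f, \delta_n) for some sequence \delta_n \to 0.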
“…From the approximation theory viewpoint, the parameters of an FNN can either be determined via training [4] or constructed directly from the data [27]. However, the "construction" idea for FNNs has not attracted much attention in the machine learning community, although various FNNs possessing the optimal approximation property have been constructed [1], [7], [9], [10], [24], [27], [30]. The main reason is that a constructed FNN possesses superior learning capability for noise-free data only, which is rarely the case in real-world applications.…”
Section: FNN Can Be Mathematically Represented By
confidence: 99%
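The section heading refers to the usual single-hidden-layer FNN representation; a common form (again my notation, not necessarily the citing paper's exact formula) is

$$N_n(x) = \sum_{k=1}^{n} a_k\, \sigma\!\left(\langle w_k, x \rangle + \theta_k\right), \qquad x \in \mathbb{R}^d,$$

where σ is the activation function and the inner weights w_k, thresholds θ_k and outer weights a_k are either trained from samples or, in the constructive approach the excerpt discusses, assigned directly from the data.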
“…The construction starts with selecting a set of centers (not the samples) with good geometrical distribution and generating the Voronoi partitions [15] based on them. Then, an FNN is constructed according to the well-developed constructive technique in approximation theory [8], [24], [27] by averaging outputs whose corresponding inputs are in the same partition. As the constructed FNN suffers from the well-known saturation phenomenon in the sense that the learning rate cannot be improved once the smoothness of the regression function goes beyond a specific value [19], we present a Landweber-type iterative method to overcome saturation in the last step.…”
Section: FNN Can Be Mathematically Represented By
confidence: 99%
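The construction described in this excerpt (select centers, form Voronoi partitions, average the outputs falling in each cell, then run a Landweber-type iteration against saturation) can be sketched in a few lines. The following is only an illustration of that recipe under my own simplifications (uniform random centers, plain nearest-center assignment); it is not the cited authors' code.

```python
# Illustrative sketch only (my own simplifications, not the cited authors' code):
# 1) choose centers, 2) assign samples to Voronoi cells by nearest center,
# 3) predict with the cell-wise average, 4) run a Landweber-type iteration
#    on the residuals to mitigate saturation.
import numpy as np

def nearest_center(X, centers):
    """Index of the nearest center for every row of X (defines the Voronoi cell)."""
    return np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1), axis=1)

def fit_cell_averages(X, y, centers):
    """Average the outputs of the samples falling in each Voronoi cell."""
    cell = nearest_center(X, centers)
    values = np.zeros(len(centers))
    for k in range(len(centers)):
        mask = cell == k
        if mask.any():
            values[k] = y[mask].mean()        # empty cells keep the value 0
    return values

def predict(Xnew, centers, values):
    """Piecewise-constant prediction: the value attached to the nearest center."""
    return values[nearest_center(Xnew, centers)]

def landweber_iterate(X, y, centers, n_iter=3, step=1.0):
    """Landweber-type iteration (my reading of the excerpt): repeatedly fit the
    residual with the same cell-averaging estimator and add a damped correction."""
    values = np.zeros(len(centers))
    for _ in range(n_iter):
        residual = y - predict(X, centers, values)
        values = values + step * fit_cell_averages(X, residual, centers)
    return values

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(500)
centers = rng.uniform(size=(25, 2))   # the excerpt asks for centers with "good
                                      # geometrical distribution"; uniform draws
                                      # are only a stand-in here
values = landweber_iterate(X, y, centers)
print(predict(X[:5], centers, values))
```

Note that a single Landweber step with step = 1.0 reproduces the plain cell-averaging estimator; the extra iterations refit the residuals, which is how the excerpt's saturation issue is addressed in this sketch.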