2013
DOI: 10.1007/s10994-013-5406-z

Almost optimal estimates for approximation and learning by radial basis function networks

Abstract: This paper quantifies the approximation capability of radial basis function networks (RBFNs) and their applications in machine learning theory. The target is to deduce almost optimal rates of approximation and learning by RBFNs. For approximation, we show that for large classes of functions, the convergence rate of approximation by RBFNs is not slower than that of multivariate algebraic polynomials. For learning, we prove that, using the classical empirical risk minimization, the RBFNs estimator can theoretica…
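The abstract describes an RBFN estimator obtained by classical empirical risk minimization (ERM). The sketch below is a rough illustration only, not the construction analyzed in the paper: it fits a Gaussian RBFN by minimizing the empirical squared risk over the outer weights, with a fixed grid of centers and a fixed width; those choices and all function names are illustrative assumptions.

```python
import numpy as np

def gaussian_rbf(x, centers, width):
    """Gaussian radial basis features: phi_k(x) = exp(-||x - c_k||^2 / (2 * width^2))."""
    # x: (n, d) inputs; centers: (N, d) RBF centers -> returns an (n, N) design matrix.
    sq_dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * width ** 2))

def fit_rbfn_erm(x, y, centers, width):
    """Empirical risk minimization with squared loss over the outer weights.

    With centers and width fixed, minimizing (1/n) * sum_i (f(x_i) - y_i)^2
    is an ordinary linear least-squares problem in the outer weights.
    """
    Phi = gaussian_rbf(x, centers, width)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return coef

def predict_rbfn(x, centers, width, coef):
    """Evaluate the fitted RBFN f(x) = sum_k coef_k * phi_k(x)."""
    return gaussian_rbf(x, centers, width) @ coef

# Toy usage: approximate a smooth univariate target with N = 20 Gaussian units.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=(200, 1))
y_train = np.sin(2 * np.pi * x_train[:, 0]) + 0.05 * rng.standard_normal(200)
centers = np.linspace(0.0, 1.0, 20).reshape(-1, 1)    # illustrative fixed centers
coef = fit_rbfn_erm(x_train, y_train, centers, width=0.1)
y_hat = predict_rbfn(x_train, centers, width=0.1, coef=coef)
print("empirical squared risk:", np.mean((y_hat - y_train) ** 2))
```

With the centers and width held fixed, the empirical risk is quadratic in the outer weights, so the ERM step reduces to a linear least-squares problem; this is only one simple way to instantiate an ERM-based RBFN estimator.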

Cited by 30 publications (25 citation statements); citing publications span 2015–2023. References 39 publications.
“…The approximation rate O(N^{−2α/d}) is tight in terms of N and increasing L cannot improve the approximation rate in N. The success of deep Figure 1: A summary of existing and our new results on the approximation rate of ReLU FNNs for continuous functions. Existing results [18,25,40,44,62,65,67] are applicable in the areas in , , and ; our new result is suitable for almost all areas when L ≥ 2.…”
Section: Introduction (mentioning)
confidence: 71%
“…For another example, given an approximation error ε, [54] proved the existence of a ReLU FNN with a constant but still unknown number of layers approximating a C^β function within the target error. These works can be divided into two cases: 1) FNNs with varying width and only one hidden layer [18,25,40,65] (visualized by the region in in Fig. 1); 2) FNNs with a fixed width of O(d) and a varying depth larger than an unknown number L [44,67] (represented by the region in in Fig.…
Section: Introduction (mentioning)
confidence: 99%
“…Refs. [158][159][160][161][162], this technique has only been applied to limited prognostic studies. iii.…”
Section: Artificial Neural Network (ANN) (mentioning)
confidence: 99%
“…Given the importance of this issue, establishing the universal approximation ability of LS-SVR as a universal approximation, like fuzzy systems and neural networks, is significant and should be demonstrated. In this study, the universal approximation theorem for Gaussian kernel is briefly reviewed and meticulous details of the universal approximation theorems for the other kernels which are used in the literature are referred to [38][39][40].…”
Section: Universal Approximation Property of LS-SVR (mentioning)
confidence: 99%