2018
DOI: 10.1007/s00034-018-1006-2

Kernel Least Mean Square Based on the Nyström Method

Cited by 18 publications (9 citation statements)
References 30 publications
“…As we know, a KAF cannot converge when the step size is greater than η+, where η+ is the upper bound of the step size and its default value here is 1. Therefore, if εη₁(k) > η+, the step sizes in Equation (15) are not updated, but the weight transfer operation is still executed.…”
Section: Combined Weight Transfer Strategy
Mentioning, confidence: 99%
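To make the guarded update in the excerpt above concrete, the following is a minimal Python sketch of a step-size bound check combined with an unconditional weight transfer. The names (eta1 for η₁(k), eps for ε, eta_plus for η+) are illustrative assumptions, and Equation (15) of the citing paper is not reproduced here.

    import numpy as np

    def guarded_step_size(eta1, eps, eta_plus=1.0):
        # Candidate step size from the citing paper's update rule (Equation (15), not shown here).
        candidate = eps * eta1
        if candidate > eta_plus:
            # Above the upper bound η+ the KAF would diverge, so the step size is left unchanged.
            return eta1
        return candidate

    def transfer_weights(w_source, w_target, mix=0.5):
        # Hypothetical weight-transfer step: blend the two filters' coefficient vectors.
        # It runs regardless of whether the step size was updated.
        return mix * np.asarray(w_source) + (1.0 - mix) * np.asarray(w_target)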
“…Although the sparsification methods [4,13] can effectively alleviate the network growth problem, the network size cannot be fixed in advance and the computational complexity still increases over time. Unlike the sparsification methods, the Nyström method [14-16] uses a subset of samples to obtain a low-rank approximation of the kernel matrix in a fixed-dimensional space. However, the selection of the samples used for the approximation is crucial to the accuracy of the Nyström method.…”
Section: Introduction
Mentioning, confidence: 99%
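As a concrete illustration of the fixed-dimensional, low-rank construction mentioned above, the following Python sketch builds the standard Nyström approximation of a kernel matrix from a uniformly sampled subset of the data. The Gaussian kernel, the landmark count m, and all function names are assumptions made for illustration, not details taken from [14-16].

    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        # κ(x, y) = exp(-||x - y||^2 / (2 σ^2))
        d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
        return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

    def nystrom_kernel_approx(X, m, sigma=1.0, seed=0):
        # Rank-m approximation K ≈ K_nm K_mm^+ K_nm^T built from m sampled "landmark" points.
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=m, replace=False)   # which samples are chosen governs the accuracy
        K_nm = gaussian_kernel(X, X[idx], sigma)          # n x m block
        K_mm = K_nm[idx]                                  # m x m block among the landmarks
        return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

    # Example: approximate a 500 x 500 Gaussian kernel matrix with only 50 landmarks.
    X = np.random.default_rng(1).standard_normal((500, 2))
    K_approx = nystrom_kernel_approx(X, m=50)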
“…In traditional system identification, gradient-based algorithms are applied to estimate the model parameters because they minimize the mean square error (MSE) between the model and the system [3]. For instance, an iterative algorithm based on gradient descent was employed to identify the Hammerstein model [4].…”
Section: Introduction
Mentioning, confidence: 99%
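To make the MSE argument concrete, here is a minimal Python sketch of gradient-based (LMS-type) parameter estimation for a linear model of an unknown system. The FIR model structure, step size, and signal lengths are illustrative assumptions and are not taken from the Hammerstein identification in [4].

    import numpy as np

    def lms_identify(u, d, order=4, mu=0.05):
        # Stochastic gradient descent on the instantaneous squared error |d(n) - w^T x(n)|^2.
        w = np.zeros(order)
        for n in range(order - 1, len(u)):
            x = u[n - order + 1:n + 1][::-1]   # regressor [u(n), u(n-1), ..., u(n-order+1)]
            e = d[n] - w @ x                   # a-priori output error of the model
            w = w + mu * e * x                 # gradient step that reduces the MSE
        return w

    # Toy example: recover the parameters of a short FIR "system" from noisy observations.
    rng = np.random.default_rng(0)
    u = rng.standard_normal(2000)
    true_w = np.array([0.8, -0.4, 0.2, 0.1])
    d = np.convolve(u, true_w)[:len(u)] + 0.01 * rng.standard_normal(len(u))
    w_hat = lms_identify(u, d)                 # w_hat approaches true_w as data accumulates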
“…Nevertheless, the batch form of these algorithms usually requires a large amount of memory and a high computational cost [27]. Kernel adaptive filters (KAFs) for online kernel processing have been studied extensively [9-12, 14, 15, 17-20, 26, 31, 33-35, 37, 38], including the kernel least mean square (KLMS) [17], kernel affine projection (KAP) [18, 26], kernel conjugate gradient (KCG) [38], and kernel recursive least squares (KRLS) [14].…”
Section: Introduction
Mentioning, confidence: 99%
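For reference, here is a minimal Python sketch of the baseline KLMS recursion from the kernel adaptive filtering literature, f_n = f_{n-1} + η e(n) κ(u(n), ·). The Gaussian kernel, step size η, and function names are illustrative assumptions. Note that the dictionary grows by one center per sample, which is the growth problem the Nyström-based construction sketched earlier sidesteps by working in a fixed-dimensional space.

    import numpy as np

    def klms_predict(u, centers, alphas, sigma=1.0):
        # f(u) = Σ_i α_i κ(c_i, u) with a Gaussian kernel
        if len(centers) == 0:
            return 0.0
        C = np.asarray(centers)
        k = np.exp(-np.sum((C - u) ** 2, axis=1) / (2.0 * sigma ** 2))
        return float(np.dot(alphas, k))

    def klms(U, d, eta=0.2, sigma=1.0):
        # Online KLMS: each new input u(n) becomes a center with coefficient η e(n).
        centers, alphas = [], []
        for n in range(len(U)):
            e = d[n] - klms_predict(U[n], centers, alphas, sigma)  # prediction error
            centers.append(U[n])                                   # network grows with every sample
            alphas.append(eta * e)
        return centers, np.array(alphas)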