2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6854606

Sparse kernel recursive least squares using L1 regularization and a fixed-point sub-iteration

Abstract: A new kernel adaptive filtering (KAF) algorithm, namely the sparse kernel recursive least squares (SKRLS), is derived by adding an l1-norm penalty on the center coefficients to the least squares (LS) cost (i.e., the sum of squared errors). In each iteration, the center coefficients are updated by a fixed-point sub-iteration. Compared with the original KRLS algorithm, the proposed algorithm can produce a much sparser network, in which many coefficients are negligibly small. A much more compact structure can…
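
For intuition, here is a minimal Python sketch of the idea described in the abstract: an l1-penalized kernel least-squares fit whose coefficients are updated by a few fixed-point (reweighted) sub-iterations, after which negligibly small centers are pruned. It is a sketch under assumed details (Gaussian kernel, a batch re-solve at every step rather than the paper's recursive update, and hypothetical names such as skrls_sketch, lam, n_subiter, and prune_tol), not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """Gaussian (RBF) kernel between two input vectors."""
    d = np.asarray(x, dtype=float) - np.asarray(z, dtype=float)
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def skrls_sketch(X, y, lam=1e-2, sigma=1.0, n_subiter=5, eps=1e-8, prune_tol=1e-4):
    """Illustrative l1-penalized kernel least squares with a fixed-point
    sub-iteration and pruning of negligibly small centers.

    At step t the coefficients alpha approximately minimize
        || y_{1:t} - K_t alpha ||^2 + lam * || alpha ||_1 ,
    where K_t holds kernel values between all inputs seen so far (rows)
    and the surviving centers (columns).
    """
    centers = []
    alpha = np.zeros(0)
    for t, x_t in enumerate(X):
        centers.append(x_t)
        y_obs = np.asarray(y[:t + 1], dtype=float)
        # Kernel matrix between all seen inputs and the current centers.
        K = np.array([[gaussian_kernel(xi, c, sigma) for c in centers]
                      for xi in X[:t + 1]])
        # Warm start from a lightly regularized least-squares solution.
        alpha = np.linalg.solve(K.T @ K + eps * np.eye(len(centers)), K.T @ y_obs)
        # Fixed-point sub-iteration: the l1 penalty is replaced by the
        # diagonal reweighting lam / (2 |alpha_i|) and the system re-solved.
        for _ in range(n_subiter):
            W = np.diag(lam / (2.0 * (np.abs(alpha) + eps)))
            alpha = np.linalg.solve(K.T @ K + W, K.T @ y_obs)
        # Prune centers whose coefficients are negligibly small.
        keep = np.abs(alpha) > prune_tol
        centers = [c for c, k in zip(centers, keep) if k]
        alpha = alpha[keep]
    return centers, alpha
```

For example, `centers, alpha = skrls_sketch(list(X), list(y))` returns the surviving centers and their coefficients; larger values of lam yield sparser networks at the cost of a larger residual.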

Cited by 12 publications (7 citation statements)
References 19 publications

Citation statements:
“…However, the LASSO method is inherently a batch method and is unsuitable for online learning. Instead, we resort to the fixed-point sub-iteration method introduced in [13]. We first use the sign function sign(α_t) to replace sgn(α_t) in (33).…”
Section: Regularized OSKRLSTD Algorithms
confidence: 99%
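
For context, the fixed-point handling of the l1 term that this statement refers to typically looks as follows. This is a sketch reconstructed from the standard reweighting identity sign(α_i) = α_i/|α_i| applied to an SKRLS-style cost; the notation (K, y, λ) is assumed, and the equations are not quoted from [13] or the citing paper.

```latex
% Stationarity condition of the l1-penalized LS cost (notation assumed):
\[
J(\alpha) = \| \mathbf{y} - \mathbf{K}\alpha \|^{2} + \lambda \, \|\alpha\|_{1},
\qquad
\nabla J = 2\,\mathbf{K}^{\top}(\mathbf{K}\alpha - \mathbf{y})
         + \lambda\,\mathrm{sign}(\alpha) = 0 .
\]
% Writing sign(alpha_i) = alpha_i / |alpha_i| turns this into a linear
% system that can be iterated as a fixed point:
\[
\alpha^{(k+1)} = \Bigl( \mathbf{K}^{\top}\mathbf{K}
   + \frac{\lambda}{2}\,\mathrm{diag}\bigl( 1/|\alpha_i^{(k)}| \bigr) \Bigr)^{-1}
   \mathbf{K}^{\top}\mathbf{y} .
\]
```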
“…From (16), A_t^{-1} also requires removing some rows and columns. Unfortunately, we cannot use the method in [30] to do this as Chen et al. do in [13], since A_t^{-1} is not a symmetric matrix. Considering that b_t will remove the corresponding elements if 𝒟_t is pruned, we directly perform Ψ_{ℐ_t}(A_t^{-1}) to remove the rows and columns indexed by ℐ_t.…”
Section: Regularized OSKRLSTD Algorithms
confidence: 99%
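
The pruning step described in this statement reduces to deleting the rows and columns of a generally non-symmetric matrix once the corresponding dictionary entries are removed. A minimal sketch of that direct removal (the function name prune_rows_cols and the use of NumPy are assumptions, not the citing paper's code):

```python
import numpy as np

def prune_rows_cols(A_inv, pruned_idx):
    """Drop the rows and columns indexed by pruned_idx from a (generally
    non-symmetric) matrix, mirroring how entries of A_t^{-1} and b_t are
    removed when the dictionary D_t is pruned."""
    A_inv = np.delete(A_inv, pruned_idx, axis=0)  # remove the rows
    A_inv = np.delete(A_inv, pruned_idx, axis=1)  # remove the columns
    return A_inv
```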
“…To restrict the growth of the weight network, several online sparsification criteria have been proposed for KAFs to select valid samples during learning, such as approximate linear dependency (ALD) [5], the novelty criterion (NC) [11], the surprise criterion (SC) [12], the coherence criterion (CC) [13], the quantization criterion [14], and sparsity-promoting regularization [15,16]. Among these criteria, the quantization-criterion-based kernel least mean square (QKLMS) algorithm achieves a better trade-off between steady-state error and computational complexity [14].…”
Section: Introduction
confidence: 99%
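
For reference, the quantization criterion mentioned in the statement above admits a new center only when the input lies farther than a threshold from every existing center; otherwise the nearest center's coefficient absorbs the update, which bounds the network size. A minimal Python sketch under assumed details (Gaussian kernel, Euclidean quantization, hypothetical names qklms_sketch, eps_q, eta), not the exact QKLMS of [14]:

```python
import numpy as np

def qklms_sketch(X, y, eta=0.5, eps_q=0.3, sigma=1.0):
    """Minimal sketch of the quantization criterion used in QKLMS-style
    learning: an input becomes a new center only if it lies farther than
    eps_q from every existing center; otherwise the nearest center's
    coefficient absorbs the LMS update, keeping the network compact."""
    def kernel(a, b):
        d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

    centers, coeffs = [], []
    for x_n, y_n in zip(X, y):
        # Prediction of the current network and instantaneous error.
        y_hat = sum(a * kernel(c, x_n) for a, c in zip(coeffs, centers))
        e_n = y_n - y_hat
        if centers:
            dists = [np.linalg.norm(np.asarray(x_n, dtype=float) -
                                    np.asarray(c, dtype=float))
                     for c in centers]
            j = int(np.argmin(dists))
            if dists[j] <= eps_q:
                coeffs[j] += eta * e_n   # quantize onto the nearest center
                continue
        centers.append(x_n)              # admit a new center
        coeffs.append(eta * e_n)
    return centers, coeffs
```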