2013
DOI: 10.1007/s11760-013-0580-9
Convergence analysis of the zero-attracting variable step-size LMS algorithm for sparse system identification

Cited by 22 publications (14 citation statements) | References 8 publications
“…[11,12,25] that the fourth-order moment of a Gaussian variable is three times the variance squared and that S(n) is symmetric, and under Assumption 3, we get…”
Section: −ρ(G(n) − E[g(n)])
Confidence: 96%
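For context, the Gaussian moment identity this excerpt invokes is the standard fourth-moment result for a zero-mean Gaussian variable:

```latex
% For a zero-mean Gaussian variable x with variance \sigma^2,
% the fourth-order moment is three times the variance squared:
\mathbb{E}[x^4] = 3\sigma^4 = 3\left(\mathbb{E}[x^2]\right)^2 .
```

This identity is what lets the moment analysis of LMS-type algorithms close fourth-order expectations of the Gaussian input in terms of its covariance.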
“…[11,12,25]. Assumption 3 is valid when the weight vector w(n) lies in the neighbourhood of the optimal solution w*(n).…”
Section: Assumption 5: The Expectation E[F(E(∞))] Is Upper Bounded
Confidence: 99%
“…After that, zero-attracting techniques have been introduced into the proportionate adaptive filter algorithms [5,16,18], the leaky least mean square algorithm [19] and normalized least mean square algorithms [7] to form desired zero attractors [6,11,17]. Their convergence characteristics are analyzed in [10,22]. Recently, a smooth-approximation l0-norm method has been introduced into the cost functions of the conventional LMS and AP algorithms to further improve estimation performance; these are known as the l0-LMS and l0-AP algorithms [8,9,12,13].…”
Section: Introduction
Confidence: 99%
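The zero-attracting LMS family referenced throughout these citation statements adds an l1-norm penalty to the LMS cost, which yields a sign-based "zero attractor" term in the weight update. A minimal sketch of that update for sparse system identification is below; it uses a fixed step size for simplicity (the paper under discussion analyzes a variable step size), and the filter length, tap positions, step size, and attraction strength are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse system to identify: 16 taps, only 2 nonzero.
N = 16
w_true = np.zeros(N)
w_true[[2, 9]] = [1.0, -0.5]

mu = 0.01    # LMS step size (fixed here; the cited work uses a variable step)
rho = 2e-4   # zero-attraction strength (l1 penalty weight times step size)

w = np.zeros(N)       # adaptive filter weights
x_buf = np.zeros(N)   # tapped delay line of input samples
for n in range(5000):
    # Shift a new Gaussian input sample into the delay line.
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()
    # Desired signal: sparse system output plus small observation noise.
    d = w_true @ x_buf + 0.01 * rng.standard_normal()
    e = d - w @ x_buf
    # ZA-LMS update: standard LMS term plus the zero attractor -rho*sign(w),
    # which pulls inactive taps toward exactly zero.
    w = w + mu * e * x_buf - rho * np.sign(w)

print(np.round(w, 2))
```

The attractor biases the active taps slightly toward zero (by roughly rho/(mu·σx²) at steady state), which is the trade-off the convergence analyses in [10,22] quantify: faster shrinkage of inactive taps versus a small bias on active ones.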