2014
DOI: 10.1016/j.sigpro.2014.03.048
Reweighted l1-norm penalized LMS for sparse channel estimation and its analysis

Abstract: A new reweighted l1-norm penalized least mean square (LMS) algorithm for sparse channel estimation is proposed and studied in this paper. Since the standard LMS algorithm does not take into account the sparsity of the channel impulse response (CIR), sparsity-aware modifications of the LMS algorithm aim at outperforming the standard LMS by adding a penalty term to the standard LMS cost function that forces the solution to be sparse. Our reweighted l1-norm penalized LMS algorithm introduces …
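The abstract is truncated before the exact recursion is stated, so the following is a rough illustration only: a minimal sketch of a reweighted-l1 (zero-attracting) LMS tap update of the kind the paper builds on. The function name rza_lms and the parameters mu (step size), rho (sparsity-penalty weight) and eps (reweighting constant), including their default values, are illustrative assumptions rather than the paper's notation.

```python
import numpy as np

def rza_lms(x, d, filter_len, mu=0.01, rho=5e-4, eps=10.0):
    """Sketch of a reweighted l1-norm penalized (zero-attracting) LMS.

    x: input signal, d: desired signal. mu, rho and eps are assumed
    names for the step size, penalty weight and reweighting constant.
    """
    w = np.zeros(filter_len)                     # estimated channel taps
    for n in range(filter_len, len(d)):
        x_n = x[n - filter_len + 1:n + 1][::-1]  # regressor, newest sample first
        e = d[n] - w @ x_n                       # a priori estimation error
        # LMS gradient step plus a reweighted zero attractor:
        # small taps are pulled strongly toward zero, large taps hardly at all.
        w = w + mu * e * x_n - rho * np.sign(w) / (1.0 + eps * np.abs(w))
    return w
```

For a truly sparse CIR, the attractor term typically lowers the steady-state misadjustment relative to plain LMS while leaving the dominant taps essentially unbiased.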

Cited by 57 publications (36 citation statements)
References 25 publications
“…From the definition of g(n) for ZALMP and RZALMP, we note that the vector E[g(∞)] is bounded between −1 and 1; a similar demonstration can be found in [7,26] …”
Section: Mean Performance
confidence: 90%
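The boundedness claim quoted above is immediate if g(n) has the reweighted zero-attractor form used in RZA-type algorithms; a sketch under that assumption (with ε denoting the reweighting constant) is:

```latex
% Assumed component-wise form of the zero attractor, consistent with the
% boundedness statement above; not necessarily the cited papers' notation.
g_i(n) = \frac{\operatorname{sgn}\!\big(w_i(n)\big)}{1 + \varepsilon\,\lvert w_i(n)\rvert}
\quad\Longrightarrow\quad
\lvert g_i(n)\rvert \le 1 \;\;\text{for every tap } i,
\;\;\text{so}\;\; -\mathbf{1} \preceq \operatorname{E}[\mathbf{g}(\infty)] \preceq \mathbf{1}.
```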
“…On the basis of the previously reported sparsity-aware LMS and AP algorithms [6–9,11–13,15,21,22,24], a p-norm penalty is adopted and incorporated into the cost function J_AP(n) of the conventional AP algorithm, which is denoted as the p-norm penalized AP (PAP) algorithm and is described as follows…”
Section: Proposed Sparse AP Algorithms
confidence: 99%
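For context, one plausible form of such a p-norm penalized AP cost, sketched under the assumption that γ denotes the penalty weight and 0 < p ≤ 1 (the cited paper's exact notation may differ):

```latex
% Hedged sketch: AP update cost augmented with a p-norm penalty term.
J_{\mathrm{PAP}}(n) \;=\; \lVert \mathbf{w}(n+1) - \mathbf{w}(n) \rVert_2^2
\;+\; \gamma \,\lVert \mathbf{w}(n+1) \rVert_p^p ,
\qquad
\lVert \mathbf{w} \rVert_p^p = \sum_i \lvert w_i \rvert^p ,
```

minimized subject to the usual AP constraint that the a posteriori errors over the most recent block of regressors vanish.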
“…After that, zero-attracting techniques have been introduced into proportionate adaptive filter algorithms [5,16,18], the leaky least mean square algorithm [19] and normalized least mean square algorithms [7] to form the desired zero attractors [6,11,17]. Their convergence characteristics are analyzed in [10,22]. Recently, a smooth l0-norm approximation has been introduced into the cost functions of the conventional LMS and AP algorithms to further improve the estimation performance; the resulting algorithms are known as the l0-LMS and l0-AP algorithms [8,9,12,13].…”
Section: Introduction
confidence: 99%
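The smooth l0-norm approximation mentioned here is typically an exponential surrogate; a sketch, assuming β is the shape parameter controlling how closely the surrogate tracks the true l0 count:

```latex
% Hedged sketch of the exponential l0 surrogate commonly used in l0-LMS / l0-AP.
\lVert \mathbf{w} \rVert_0 \;\approx\; \sum_i \Big(1 - e^{-\beta\,\lvert w_i \rvert}\Big),
\qquad
\frac{\partial}{\partial w_i}\Big(1 - e^{-\beta\,\lvert w_i \rvert}\Big)
= \beta\,\operatorname{sgn}(w_i)\, e^{-\beta\,\lvert w_i \rvert},
```

so the resulting attractor acts almost exclusively on taps near zero and vanishes exponentially fast for large taps.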
“…For sparse models, the ZA-LMS achieves better steady-state performance than the standard LMS. Motivated by reweighting in compressive sampling, the RZA-LMS was proposed [6,17]. In addition, the NNCLMS [7] and DWZANLMS [23] algorithms were proposed to further improve the performance of l1-norm constrained LMS.…”
Section: Introduction
confidence: 99%
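To make the reweighting motivation concrete, the ZA-LMS and RZA-LMS tap updates differ only in their attractor terms; a sketch with ρ as an assumed penalty weight and ε the reweighting constant:

```latex
% Hedged sketch contrasting the plain and reweighted zero attractors.
\text{ZA-LMS:}\quad \mathbf{w}(n+1) = \mathbf{w}(n) + \mu\, e(n)\,\mathbf{x}(n)
 - \rho\,\operatorname{sgn}\!\big(\mathbf{w}(n)\big),
\qquad
\text{RZA-LMS:}\quad \mathbf{w}(n+1) = \mathbf{w}(n) + \mu\, e(n)\,\mathbf{x}(n)
 - \rho\,\frac{\operatorname{sgn}\!\big(\mathbf{w}(n)\big)}{1 + \varepsilon\,\lvert \mathbf{w}(n)\rvert}.
```

The reweighting leaves large, truly active taps almost untouched while still shrinking near-zero taps, which is why RZA-LMS typically improves on ZA-LMS for sparse channels.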