2013
DOI: 10.1109/lcomm.2013.011113.121586
Non-Uniform Norm Constraint LMS Algorithm for Sparse System Identification

Cited by 81 publications (34 citation statements)
References 11 publications
“…In order to fully exploit the sparsity property of the multi-path channel, we propose a group-constrained MCC algorithm by exerting the l0-norm penalty on the group of large channel coefficients and forcing the l1-norm penalty on the group of small channel coefficients. Herein, a non-uniform norm is used to split the non-uniform penalized algorithms into a large group and a small group, and the non-uniform norm is defined as [37,38] …”
Section: The Proposed Group-constrained Sparse MCC Algorithms
Citation type: mentioning (confidence: 99%)
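The excerpt above describes a non-uniform norm that applies an l0-style penalty to large coefficients and an l1-style penalty to small ones. As a minimal sketch of that idea (the function name, the simple magnitude threshold used to split the groups, and the unweighted sum are illustrative assumptions; the exact norm definition is given in the cited references [37,38]):

```python
import numpy as np

def non_uniform_norm_penalty(w, threshold):
    """Sketch of a non-uniform norm penalty on a coefficient vector w.

    Coefficients with magnitude >= threshold form the "large" group and
    receive an l0-style penalty (a count of active taps); the rest form
    the "small" group and receive an l1-style penalty (sum of magnitudes),
    which uniformly attracts inactive taps toward zero.
    The threshold-based split is an illustrative assumption.
    """
    large = np.abs(w) >= threshold          # "large" group mask
    l0_part = np.count_nonzero(w[large])    # l0: count active coefficients
    l1_part = np.sum(np.abs(w[~large]))     # l1: magnitudes of small taps
    return l0_part + l1_part
```

For example, with `w = [1.0, 0.01, -0.5, 0.0]` and a threshold of 0.1, the large group contributes a count of 2 and the small group contributes 0.01.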
“…For the "large" group, l0-norm penalty is used to count the number of active channel coefficients, and l1-norm penalty is adopted to uniformly attract inactive coefficients to zero for the "small" group. To effectively integrate these two groups into (23), we define [37] …”
Section: The Proposed Group-constrained Sparse MCC Algorithms
Citation type: mentioning (confidence: 99%)
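In the update equation, the integrated penalty acts as a selective zero attractor: the l0 count is locally flat away from zero, so the "large" group receives no attraction, while the l1 gradient pulls "small" taps toward zero. A sketch under those assumptions (the function name, threshold split, and step parameter `rho` are illustrative, not the cited papers' exact definitions):

```python
import numpy as np

def selective_zero_attractor(w, threshold, rho):
    """Gradient-style attraction term for a large/small split penalty.

    Taps with |w_i| >= threshold ("large" group): no attraction, since
    the l0 count has zero gradient for nonzero coefficients.
    Taps with |w_i| < threshold ("small" group): l1 gradient
    rho * sign(w_i), which pulls the tap toward zero.
    """
    return rho * np.where(np.abs(w) < threshold, np.sign(w), 0.0)
```

Subtracting this term in each adaptive update leaves large (active) taps unbiased while shrinking small (inactive) taps.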
“…Many algorithms have been developed in the literature [12][13][14][15][16][17][18][19][20] to solve the problem in (3); these employ the mean square error (MSE) criterion [21] based on second-order statistics, which is optimal when e is Gaussian noise. In practical applications, however, the transmitted signals are distorted not only by Gaussian noise but also by other kinds of noise, such as burst noise and high noise.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)
“…In general, a sparse adaptive filtering algorithm can be derived by incorporating a sparsity penalty term (SPT), such as the l0-norm, into a traditional adaptive algorithm. Typical examples of sparse adaptive filtering algorithms include sparse least mean square (LMS) [1][2][3][4], sparse affine projection algorithms (APA) [5], sparse recursive least squares (RLS) [6], and their variations [7][8][9][10][11][12].…”
Section: Introduction
Citation type: mentioning (confidence: 99%)