2014
DOI: 10.1007/s11760-013-0610-7
A $p$-norm variable step-size LMS algorithm for sparse system identification

Cited by 34 publications (30 citation statements)
References 17 publications
“…In the first three sparse systems, there are 32 coefficients in the sparse system and the number of nonzero taps K is 1, 4 and 8 for the first, second and third experiments, respectively. In these three experiments, the positions of the nonzero taps are distributed randomly within the length of the sparse channel and these nonzero taps are set to 1, similar to [1,2,8,13,15,24]. Both the driving input signal and the system-independent additive noise are assumed to be white Gaussian with zero mean and variances of 1 and 0.001, respectively.…”
Section: Results
Confidence: 99%
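The experimental setup quoted above can be sketched directly. This is a minimal reconstruction under the stated assumptions (32 taps, K unit taps at random positions, unit-variance white Gaussian input, noise variance 0.001); the variable names, sample count, and random seed are illustrative, not from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

N = 32  # total number of filter coefficients
K = 4   # number of nonzero taps (1, 4, or 8 in the experiments)

# Place K unit taps at random positions within the channel length.
w_true = np.zeros(N)
w_true[rng.choice(N, size=K, replace=False)] = 1.0

# White Gaussian input (variance 1) and additive noise (variance 0.001).
n_samples = 5000
x = rng.normal(0.0, 1.0, n_samples)
v = rng.normal(0.0, np.sqrt(0.001), n_samples)

# Desired signal: output of the sparse channel plus noise.
d = np.convolve(x, w_true)[:n_samples] + v
```

Any adaptive filter under test then receives `x` and `d` and is judged by how closely its estimate recovers `w_true`.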
“…Similar to the l0-LMS, l1-LMS, l0-AP and l1-AP algorithms [8,12,13,15,24], the matrix P can assign a different value p_i to each entry of ŵ(n), and hence the last two terms on the right-hand side of Eq. (17) can classify the filter coefficients into small and large groups depending on the absolute value of ŵ_i(n) [1,24]. Since we are interested in the minimum value of P, the minimization of the last two terms at the ith iteration…”
Section: Proposed Sparse AP Algorithms
Confidence: 99%
“…In general, a sparse adaptive filtering algorithm can be derived by incorporating a sparsity penalty term (SPT), such as the l0-norm, into a traditional adaptive algorithm. Typical examples of sparse adaptive filtering algorithms include sparse least mean square (LMS) [1-4], sparse affine projection algorithms (APA) [5], sparse recursive least squares (RLS) [6], and their variations [7-12].…”
Section: Introduction
Confidence: 99%
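The derivation pattern this statement describes, adding a sparsity penalty term to a traditional adaptive algorithm, can be illustrated with the zero-attracting (l1-penalized) LMS, one of the simplest members of the family cited above. This is a generic sketch of that idea, not the paper's p-norm variable step-size algorithm; the step size `mu` and attractor weight `rho` are illustrative values:

```python
import numpy as np

def za_lms(x, d, N, mu=0.01, rho=5e-4):
    """Zero-attracting LMS sketch: the standard LMS gradient step plus a
    sign-based shrinkage term (from an l1 penalty) that pulls small
    coefficients toward zero, favoring sparse estimates."""
    w = np.zeros(N)       # filter estimate
    x_buf = np.zeros(N)   # tapped-delay line of recent inputs
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = x[n]
        e = d[n] - w @ x_buf              # a priori error
        w += mu * e * x_buf               # standard LMS update
        w -= rho * np.sign(w)             # zero attractor from the l1 SPT
    return w
```

Replacing the `np.sign(w)` attractor with the gradient of an approximate l0-norm or a p-norm penalty yields the other algorithms in this family; the structure of the update stays the same.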