2020
DOI: 10.1109/access.2020.3026058
Block-Sparsity Log-Sum-Induced Adaptive Filter for Cluster Sparse System Identification

Abstract: In this work, an effective adaptive block-sparsity log-sum least mean square (BSLS-LMS) algorithm is proposed to improve the convergence performance of cluster sparse system identification. The main idea of the proposed scheme is to add a new block-sparsity-inducing term to the cost function of the LMS algorithm. We utilize the ℓ1 norm of the adaptive tap weights and the log-sum function as a mixed constraint; by optimizing the cost function through the gradient descent method, the proposed adaptive filtering method can itera…
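The abstract's core idea — an LMS update augmented with a gradient step on a block-wise log-sum penalty built from the ℓ1 norms of tap-weight blocks — can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm (the full cost function is truncated above); the block length, step size `mu`, penalty weight `rho`, and smoothing constant `eps` are assumed parameters.

```python
import numpy as np

def bsls_lms(x, d, n_taps=16, block_len=4, mu=0.01, rho=1e-4, eps=0.1):
    """Sketch of an LMS filter with a block-wise log-sum sparsity penalty.

    The assumed cost is J = e^2/2 + rho * sum_j log(1 + ||w_j||_1 / eps),
    where w_j are non-overlapping blocks of taps; its gradient adds a
    zero-attracting term that shrinks whole blocks (cluster sparsity).
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor [x[n], ..., x[n-n_taps+1]]
        e = d[n] - w @ u                     # a priori estimation error
        g = np.zeros(n_taps)                 # gradient of the log-sum block penalty
        for b in range(0, n_taps, block_len):
            blk = slice(b, b + block_len)
            g[blk] = np.sign(w[blk]) / (eps + np.abs(w[blk]).sum())
        w += mu * e * u - rho * g            # LMS step plus block-sparsity attractor
    return w
```

Because the attractor strength 1/(eps + ||w_j||_1) is largest for blocks near zero, inactive blocks are driven to zero quickly while the active cluster is only mildly biased.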

Cited by 11 publications (3 citation statements)
References 32 publications
“…In addition, several techniques have been proposed in BS systems identification to minimize convergence time and improve algorithm resilience. For such applications, stochastic gradient descent techniques have been proposed [32].…”
Section: Ebsaba Algorithm
confidence: 99%
“…Throughout this paper, all the problems are discussed under the assumption that the columns of Φ are standardized to have unit ℓ2 norm. Since the optimization (1) is an NP-hard problem whose computational complexity grows exponentially with the signal dimension, many efficient improved methods have been proposed recently, such as orthogonal matching pursuit (OMP) [4] and the log-sum (q = 1, 2) minimization [5][6][7][8][9] min…”
Section: Introduction
confidence: 99%
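The OMP method cited in the statement above is the greedy alternative to log-sum minimization: at each step it selects the dictionary column most correlated with the current residual, then re-fits by least squares on the chosen support. A minimal sketch under the quoted unit-ℓ2-norm-column assumption (this is illustrative, not code from any of the cited works):

```python
import numpy as np

def omp(Phi, y, k):
    """Greedy OMP sketch: approximate a k-sparse x with y ≈ Phi @ x.

    Assumes the columns of Phi have (roughly) unit ℓ2 norm, matching the
    standardization assumption in the quoted passage.
    """
    m, n = Phi.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares re-fit on the selected support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```

In the noiseless case, if the k true columns are selected, the final least-squares fit drives the residual to zero exactly.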
“…Meanwhile, the work provided insight into the advantages and drawbacks of ℓ1 relaxation methods such as BPDN and the Dantzig selector, as opposed to greedy approaches such as OMP and thresholding. In [27] and [7], Shen and Fang respectively proved the existence of a global minimum for the noiseless log-sum minimization with q equal to 1 and 2. However, Shen considered the RIP condition, where the constant δ3k > 0 is harsh and unsolvable.…”
Section: Introduction
confidence: 99%