2019
DOI: 10.3390/a12020036

Conjugate Gradient Hard Thresholding Pursuit Algorithm for Sparse Signal Recovery

Abstract: We propose a new iterative greedy algorithm to reconstruct sparse signals in Compressed Sensing. The algorithm, called Conjugate Gradient Hard Thresholding Pursuit (CGHTP), is a simple combination of Hard Thresholding Pursuit (HTP) and Conjugate Gradient Iterative Hard Thresholding (CGIHT). The conjugate gradient method, with its fast asymptotic convergence rate, is integrated into the HTP scheme, which otherwise uses only a simple line search; this accelerates the convergence of the iterative process. Moreover, an adaptive st…
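As a rough illustration of the scheme the abstract describes, the sketch below runs an HTP-style outer iteration (gradient step, then hard thresholding to the s largest entries) but replaces the exact least-squares debiasing on the candidate support with a few conjugate gradient steps on the normal equations. All names and defaults here (cghtp_sketch, mu, n_iter, cg_iter) are illustrative assumptions; the paper's exact CGHTP update, step-size rule, and stopping criteria may differ.

```python
import numpy as np

def cghtp_sketch(A, y, s, mu=1.0, n_iter=50, cg_iter=5):
    """Hedged sketch of an HTP-style iteration whose debiasing step on the
    candidate support uses a few conjugate gradient (CG) iterations instead
    of an exact least-squares solve. Illustrative only, not the authors'
    exact CGHTP algorithm."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        # Gradient step on the full signal, then keep the s largest entries.
        z = x + mu * A.T @ (y - A @ x)
        S = np.argsort(np.abs(z))[-s:]          # candidate support
        # Approximately solve min_u ||y - A_S u||_2 with a few CG steps
        # on the normal equations A_S^T A_S u = A_S^T y.
        As = A[:, S]
        u = np.zeros(s)
        r = As.T @ y                             # residual of normal equations (u = 0)
        p = r.copy()
        rs = r @ r
        for _ in range(cg_iter):
            Ap = As.T @ (As @ p)
            alpha = rs / (p @ Ap + 1e-12)
            u += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            p = r + (rs_new / (rs + 1e-12)) * p
            rs = rs_new
        x = np.zeros(n)
        x[S] = u
    return x
```

Capping the inner solve at a few CG iterations keeps the per-iteration cost close to plain IHT while retaining most of the debiasing benefit of HTP, which matches the acceleration motivation stated in the abstract.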

Cited by 5 publications (3 citation statements) · References 34 publications

Citing statements (ordered by relevance):
“…Overall, the proposed 2-step Q-PJOMP outperforms the other schemes for several reasons. First, Q-PJOMP is more efficient than the IHT-based algorithm (Q-PJIHT). Second, since the choice of η is critical in IHT [45], Q-PJIHT employs a flexible gradient step size η, adapted to the sparsity level at each iteration rather than fixed at 0.01 as in [16].…”
Section: B. SNR Degradation
confidence: 99%
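The contrast drawn here, a fixed step size of 0.01 versus an η adapted at each iteration, can be illustrated with a minimal IHT sketch. The adaptive rule below is the standard normalized-IHT exact line search on the current support; it is an assumption that Q-PJIHT computes η this way, and the function and parameter names (iht_adaptive, n_iter) are illustrative.

```python
import numpy as np

def iht_adaptive(A, y, s, n_iter=100):
    """Minimal IHT sketch with a normalized (adaptive) step size eta,
    as opposed to a fixed eta such as 0.01. Illustrative only; the
    cited Q-PJIHT variant may compute eta differently."""
    n = A.shape[1]
    x = np.zeros(n)
    S = np.arange(n)                       # start with the full support
    for _ in range(n_iter):
        g = A.T @ (y - A @ x)              # negative gradient of 0.5*||y - Ax||^2
        gS = np.zeros(n)
        gS[S] = g[S]                       # gradient restricted to the support
        AgS = A @ gS
        eta = (gS @ gS) / (AgS @ AgS + 1e-12)   # exact line search along gS
        # (the usual NIHT safeguard when the support changes is omitted for brevity)
        z = x + eta * g
        S = np.argsort(np.abs(z))[-s:]     # keep the s largest magnitudes
        x = np.zeros(n)
        x[S] = z[S]
    return x
```

A fixed η = 0.01 would simply replace the eta line above; the adaptive choice rescales the step to the current support, which makes convergence far less sensitive to the scaling of A.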
“…The proposed work also selects the optimal step size (η) depending on the sparsity level and the dictionary length M. Indeed, since IHT is a gradient-descent-based algorithm, its step size must be chosen optimally for better convergence [45].…”
Section: (Ab): Step 3 - Channel Estimate
confidence: 99%
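The statement that the step size of a gradient-descent method like IHT must be chosen well can be made concrete. For the least-squares objective, exact minimization along the gradient direction restricted to the current support $S$ has a closed form (this is the normalized-IHT rule; it is an assumption that [45] refers to exactly this expression):

$$
\eta^\star \;=\; \arg\min_{\eta}\ \tfrac{1}{2}\,\bigl\| y - A\bigl(x + \eta\, g_S\bigr) \bigr\|_2^2
\;=\; \frac{\|g_S\|_2^2}{\|A\,g_S\|_2^2},
\qquad g = A^{\top}(y - Ax),
$$

where $g_S$ agrees with $g$ on $S$ and is zero elsewhere. Setting the derivative in $\eta$ to zero gives $\eta = (A g_S)^{\top}(y - Ax) / \|A g_S\|_2^2$, and the numerator simplifies to $g_S^{\top} g = \|g_S\|_2^2$. The `eta` line in the sketch above implements exactly this formula.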