2020
DOI: 10.1137/18m1219187

Optimal $k$-Thresholding Algorithms for Sparse Optimization Problems

Abstract: The simulations indicate that the existing hard thresholding technique, which is independent of the residual function, may cause a dramatic increase or numerical oscillation of the residual. This inherent drawback of hard thresholding renders the traditional thresholding algorithms unstable and thus generally inefficient for solving practical sparse optimization problems. How to overcome this weakness and develop a truly efficient thresholding method is a fundamental question in this field. The aim of this paper is to…
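
For orientation, the conventional hard thresholding operator the abstract refers to simply keeps the k largest-magnitude entries of a vector and zeroes out the rest, regardless of how this affects the residual ||y - Ax||. A minimal sketch of that operator and of one iterative-hard-thresholding update (function names and the step size are illustrative, not taken from the paper):

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
    z[idx] = x[idx]
    return z

def iht_step(x, A, y, k, step=1.0):
    """One iterative-hard-thresholding update: a gradient step on
    0.5*||Ax - y||^2 followed by hard thresholding. Note that the
    thresholding itself ignores the residual, which is the behavior
    the abstract criticizes."""
    grad = A.T @ (A @ x - y)
    return hard_threshold(x - step * grad, k)
```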

Cited by 37 publications (124 citation statements)
References 54 publications
“…Compared with SVRGHT, we can see that the results of our SRGSP in the first few iterations are similar to those of SVRGHT. However, because many gradient updates are followed by a hard thresholding step, SRGSP can obtain a better solution, as discussed in [36]. This further verifies the advantage of our SRGSP over other methods.…”
Section: Results (mentioning)
confidence: 99%
“…In other words, we use the stochastic recursive gradient proposed in [35], which is suitable for solving non-convex problems, to optimize the non-convex sparse representation problem in this paper. In order to keep the gradient information of the current iterate, as suggested in [36], we perform many gradient descent steps, followed by a hard thresholding operation. We also construct the most relevant support on which minimization will be efficient.…”
Section: Introduction (mentioning)
confidence: 99%
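
The structure described in this statement, several gradient descent steps followed by a single hard thresholding and a minimization restricted to the resulting support, can be sketched as follows. This is only an illustration of that pattern under a least-squares loss; it is not the actual SRGSP or SVRGHT implementation, and all function names and parameters are hypothetical:

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x; zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    z[idx] = x[idx]
    return z

def sparse_recovery_sketch(A, y, k, outer_iters=20, inner_steps=10, step=1e-2):
    """Alternate between several gradient descent steps on 0.5*||Ax - y||^2
    and one hard thresholding, then re-fit on the selected support.
    The step size should be small enough for the gradient steps to converge."""
    x = np.zeros(A.shape[1])
    for _ in range(outer_iters):
        for _ in range(inner_steps):            # many gradient updates ...
            x = x - step * A.T @ (A @ x - y)
        x = hard_threshold(x, k)                # ... followed by one thresholding
        S = np.flatnonzero(x)                   # minimize over the chosen support
        if S.size:
            x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
            x = np.zeros(A.shape[1])
            x[S] = x_S
    return x
```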
“…Clearly, the convexity of f(λ 6 ) guarantees that (37) is a convex optimization. Moreover, (36) and the property…”
Section: Relaxation Models (mentioning)
confidence: 99%
“…Due to the constraints (36), the optimal value of the problem (37) is finite if it is feasible. By replacing ζ by $\mathbb{R}^n_+$ in (37), we also obtain a new relaxation of (22):…”
Section: One-step Dual-density-based Algorithm (mentioning)
confidence: 99%