Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020
DOI: 10.1145/3394486.3403070
A Block Decomposition Algorithm for Sparse Optimization

Cited by 7 publications (16 citation statements: 0 supporting, 16 mentioning, 0 contrasting; citing years 2020–2024).
References 37 publications.
“…Its convergence and worst-case complexity are well investigated for different coordinate selection rules such as the cyclic rule (Beck and Tetruashvili 2013), the greedy rule (Hsieh and Dhillon 2011), and the random rule (Lu and Xiao 2015; Richtárik and Takáč 2014). It has been extended to solve many nonconvex problems such as penalized regression (Breheny and Huang 2011; Deng and Lan 2020), the eigenvalue complementarity problem (Patrascu and Necoara 2015), ℓ0-norm minimization (Beck and Eldar 2013; Yuan, Shen, and Zheng 2020), the resource allocation problem (Necoara 2013), leading eigenvector computation (Li, Lu, and Wang 2019), and sparse phase retrieval (Shechtman, Beck, and Eldar 2014).…”
Section: Introduction (mentioning; confidence: 99%)
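The three selection rules named in this statement differ only in how the working coordinate is picked at each step. The following Python sketch illustrates them for the least-squares objective f(x) = 0.5·||Ax − b||²; the function name and parameters are illustrative, not taken from any of the cited works.

```python
import numpy as np

def coordinate_descent(A, b, rule="cyclic", iters=100, seed=0):
    """Minimize f(x) = 0.5*||Ax - b||^2 one coordinate at a time.

    rule: 'cyclic', 'greedy' (largest-|gradient| entry, i.e. the
    Gauss-Southwell rule), or 'random' (uniform sampling).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = np.sum(A**2, axis=0)        # per-coordinate curvature of f
    for t in range(iters):
        grad = A.T @ (A @ x - b)         # full gradient (used by 'greedy')
        if rule == "cyclic":
            i = t % n
        elif rule == "greedy":
            i = int(np.argmax(np.abs(grad)))
        else:                            # 'random'
            i = rng.integers(n)
        if col_sq[i] > 0:
            x[i] -= grad[i] / col_sq[i]  # exact minimization along coord i
    return x
```

For a quadratic objective the one-dimensional subproblem has a closed form, which is why the update divides the partial gradient by the coordinate's curvature; nonquadratic objectives would instead use a line search or proximal step.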
“…It is proven that this condition is stronger than the optimality criterion based on iterative hard thresholding (IHT). The work of [26, 25] proposes and analyzes a new block coordinate optimality condition for general sparse optimization. The block coordinate optimality condition is stronger than the coordinate-wise optimality condition, since it includes the latter as a special case.…”
Section: Introduction (mentioning; confidence: 99%)
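To make the inclusion concrete: a point satisfies a block-k optimality condition if re-optimizing any subset of k coordinates, with the rest held fixed, cannot decrease the objective; setting k = 1 recovers the coordinate-wise condition. The brute-force checker below is only an illustrative sketch for a smooth unconstrained f on small problems (the cited papers' actual condition additionally handles the ℓ0 constraint via combinatorial subproblems); the function name and tolerance are assumptions.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

def is_block_optimal(f, x, k, tol=1e-8):
    """Check block-k optimality of x (numpy array) by exhaustive search.

    Cost grows as C(n, k), so this is feasible only for small n; with
    k = 1 it reduces to the coordinate-wise optimality check, which is
    why the block condition is the stronger of the two.
    """
    n = len(x)
    fx = f(x)
    for block in itertools.combinations(range(n), k):
        idx = list(block)
        def f_block(z, idx=idx):
            y = x.copy()
            y[idx] = z                 # vary only the chosen block
            return f(y)
        res = minimize(f_block, x[idx])  # local re-optimization of block
        if res.fun < fx - tol:
            return False               # some block improves f: not optimal
    return True
```

For example, `is_block_optimal(lambda v: np.sum((v - 1.0)**2), np.ones(4), 2)` returns True, since the global minimizer cannot be improved on any block.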
“…(i) They fail to solve general nonsmooth sparsity-constrained problems. The block decomposition methods [26, 25] can only be applied to smooth optimization problems. Both the subgradient IHT method and the dual IHT method can only solve problems whose objective function is Lipschitz continuous, and fail on general non-Lipschitz problems, which the penalty method can handle.…”
Section: Introduction (mentioning; confidence: 99%)
“…For the general form of Problem (1), matching pursuit methods encounter the same issue as GraSP. Later, coordinate-wise algorithms and block decomposition algorithms were also developed (Patrascu and Necoara 2015; Yuan, Shen, and Zheng 2019). However, they may either cycle indefinitely, if the minimization step has multiple solutions, or need to solve a subproblem globally using combinatorial search at each iteration, which may fail for very large sparsity k. Hence, iterative gradient-based hard thresholding (HT) methods have gained significant interest and become popular for nonconvex sparse learning.…”
Section: Introduction (mentioning; confidence: 99%)
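For reference, a standard iterative hard thresholding step alternates a gradient step with projection onto the sparsity constraint by keeping only the k largest-magnitude entries. The Python sketch below is a generic textbook variant for least squares, not the specific method of any paper cited here; the default step size 1/||A||₂² is an assumption chosen to guarantee descent.

```python
import numpy as np

def iht(A, b, k, step=None, iters=200):
    """Iterative hard thresholding for
    min 0.5*||Ax - b||^2  subject to  ||x||_0 <= k."""
    n = A.shape[1]
    if step is None:
        # 1/L, where L = ||A||_2^2 bounds the largest eigenvalue of A^T A
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - b))  # gradient step
        keep = np.argsort(np.abs(x))[-k:]   # indices of k largest |x_i|
        z = np.zeros(n)
        z[keep] = x[keep]                   # hard-thresholding projection
        x = z
    return x
```

Unlike the combinatorial subproblems mentioned in the statement above, each HT projection costs only a sort, which is what makes these methods attractive for large sparsity levels.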