2021
DOI: 10.1007/s10898-021-01093-0

Nonconvex and Nonsmooth Sparse Optimization via Adaptively Iterative Reweighted Methods

Cited by 5 publications (2 citation statements)
References 38 publications
“…[32]. Extensive computational studies in [28,37,56,60] revealed that the ℓq regularization has a significantly stronger sparsity-promoting property than the ℓ1 regularization, in the sense that it is guaranteed to recover a sparser solution from a smaller number of samples. Biological studies in [27,28,46,47] showed that the ℓ0 and ℓ1/2 regularizations yield more biologically reliable solutions than the ℓ1 regularization when applied to infer gene regulatory networks (GRNs).…”
Section: Regularization Methods
confidence: 99%
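The stronger sparsity-promoting behaviour described in the statement above can already be seen in one dimension. The following is a minimal, illustrative sketch (not taken from the cited studies): it approximates the scalar minimizer of 0.5*(t - x)^2 + lam*|t|^q by brute-force grid search and compares q = 1 with q = 1/2. The scalar objective, the grid-search solver, and the parameter values are assumptions made for clarity only. Qualitatively, the ℓ1/2 penalty keeps small inputs at exactly zero over a wider range and then jumps to a nonzero value, whereas the ℓ1 penalty shrinks every input by the same amount.

```python
# Illustrative toy comparison of l1 vs l_{1/2} scalar thresholding behaviour.
# For min_t 0.5*(t - x)^2 + lam*|t|**q, approximate the minimizer by grid search.
import numpy as np

def scalar_prox_grid(x, lam, q, grid=np.linspace(-3.0, 3.0, 200001)):
    """Approximate argmin_t 0.5*(t - x)**2 + lam*|t|**q by dense grid search."""
    objective = 0.5 * (grid - x) ** 2 + lam * np.abs(grid) ** q
    return grid[np.argmin(objective)]

lam = 0.5
for x in [0.2, 0.6, 1.0, 1.5, 2.0]:
    t_l1 = scalar_prox_grid(x, lam, q=1.0)    # behaves like soft thresholding
    t_lh = scalar_prox_grid(x, lam, q=0.5)    # wider "dead zone", then a jump to nonzero
    print(f"x = {x:4.2f}:  l1 -> {t_l1:6.3f}   l1/2 -> {t_lh:6.3f}")
```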
“…However, the lower-order regularization problem (1.3) is nonconvex and nonsmooth, so in general it is very difficult to design practical algorithms that approach its global solution. Instead, tremendous effort has been devoted to developing optimization algorithms that approach a local minimum or a stationary point of problem (1.3), such as the smoothing method [14], PGA [28], the iterative reweighted minimization method [56], and the difference-of-convex-functions algorithm (DCA) [23]. However, owing to the nonconvexity of the lower-order regularization, the convergence theory for this problem is still far from satisfactory: only convergence to a stationary point has been established in the literature [2,28,29], and there is still no theoretical guarantee of convergence to a global minimum or to the ground-truth sparse solution.…”
Section: Regularization Methods
confidence: 99%
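For concreteness, here is a minimal sketch of a generic iteratively reweighted ℓ1 scheme of the kind referenced above (and in the spirit of the reviewed paper's title), assuming the lower-order problem (1.3) takes the common form min_x 0.5*||Ax - b||^2 + lam*||x||_q^q with 0 < q < 1. This is not the authors' exact algorithm: the smoothing parameter eps, the inner ISTA subproblem solver, and all step sizes are illustrative assumptions, and, as the statement above notes, such a scheme can only be expected to reach a stationary point rather than a global minimum.

```python
# Generic iteratively reweighted l1 (IRL1) sketch for an assumed model
#   min_x 0.5*||A x - b||^2 + lam*||x||_q^q,  0 < q < 1.
# Illustrative only; parameters and the inner solver are assumptions.
import numpy as np

def irl1_lq(A, b, lam=0.1, q=0.5, eps=1e-2, outer_iters=30, inner_iters=50):
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth part
    for _ in range(outer_iters):
        # Reweighting: majorize |x_i|^q by a weighted-l1 term at the current iterate;
        # eps keeps the weights finite when a coordinate is exactly zero.
        w = q * (np.abs(x) + eps) ** (q - 1)
        # Approximately solve the convex weighted-l1 subproblem with a few ISTA steps.
        for _ in range(inner_iters):
            grad = A.T @ (A @ x - b)
            z = x - grad / L
            thresh = lam * w / L
            x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x

# Toy usage on a random sparse-recovery instance (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true
x_hat = irl1_lq(A, b, lam=0.05)
print("nonzeros in recovered solution:", np.count_nonzero(np.abs(x_hat) > 1e-3))
```

Each outer iteration replaces the concave penalty |x_i|^q by its weighted-ℓ1 surrogate at the current iterate, so the nonconvex problem is handled through a sequence of convex weighted-ℓ1 subproblems; decreasing eps adaptively across iterations is one common variant of this idea.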