2019
DOI: 10.48550/arxiv.1912.00137
Preprint

Proximal Splitting Algorithms for Convex Optimization: A Tour of Recent Advances, with New Twists

Cited by 12 publications (25 citation statements)
References 0 publications
“…For large-scale convex optimization problems like (1), primal-dual splitting algorithms [9,15,16,24,29,34,41] are well suited, as they are easy to implement and typically show state-of-the-art performance. The fully split algorithms do not require the ability to project onto the constraint space {x ∈ X : Kx = b}.…”
Section: Introduction
confidence: 99%
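The dual handling of the constraint mentioned in this excerpt can be made concrete. Below is a minimal sketch, not taken from the survey or the citing papers, of one primal-dual splitting iteration in the Chambolle-Pock family applied to the illustrative problem min ‖x‖₁ subject to Kx = b; the constraint is handled through its dual variable y, so no projection onto {x ∈ X : Kx = b} is ever computed. The function name and the step-size choice τ = σ = 0.99/‖K‖ (which satisfies the standard condition τσ‖K‖² ≤ 1) are assumptions for illustration.

```python
import numpy as np

def primal_dual_equality(K, b, n_iter=500):
    """Illustrative Chambolle-Pock iteration for min ||x||_1 s.t. Kx = b."""
    m, n = K.shape
    x = np.zeros(n)
    y = np.zeros(m)
    # step sizes must satisfy tau * sigma * ||K||^2 <= 1
    L = np.linalg.norm(K, 2)
    tau = sigma = 0.99 / L
    for _ in range(n_iter):
        x_old = x
        # primal step: prox of tau*||.||_1 is soft-thresholding
        v = x - tau * (K.T @ y)
        x = np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)
        x_bar = 2.0 * x - x_old
        # dual step: the conjugate of the indicator of {b} is <b, .>,
        # whose prox is a shift, so no projection onto {x : Kx = b} is needed
        y = y + sigma * (K @ x_bar - b)
    return x

# usage: recover a sparse vector from exact linear measurements
rng = np.random.default_rng(0)
K = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = primal_dual_equality(K, K @ x_true, n_iter=3000)
print(np.linalg.norm(x_hat - x_true))  # small residual
```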
“…However, we can still obtain a linear convergence rate by carefully utilizing the L-smoothness property of the function F(x). A similar issue appeared in the design of algorithms for solving linearly constrained minimization problems, including the non-accelerated algorithms (Condat et al., 2019; Salim et al., 2020) and the optimal algorithm (Salim et al., 2021). However, the authors of these works considered a different problem reformulation from (11), and hence their results cannot be applied here.…”
Section: Primal Algorithm Design and Convergence
confidence: 99%
“…To solve problem (21), a well-suited algorithm is the Proximal Method of Multipliers [33], [34], which, initialized with some variables U^(0) ∈ (C^{3×3})^{|E|} and s^(0) ∈ C^d, consists of the iteration, for i = 0, 1, …
Section: Proposed Algorithm
confidence: 99%
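The quoted iteration is cut off, so the sketch below is a generic instance of the Proximal Method of Multipliers, not the citing paper's algorithm: for minimize f(x) subject to Ax = b, each step minimizes the augmented Lagrangian plus a proximal term (1/(2ρ))‖x − x_k‖² and then updates the multiplier. A quadratic f is assumed so that the inner minimization is a single linear solve; all names and parameter values are illustrative.

```python
import numpy as np

def proximal_method_of_multipliers(A, b, c, rho=1.0, n_iter=200):
    """Generic PMM sketch for min (1/2)||x - c||^2 s.t. Ax = b."""
    m, n = A.shape
    x = np.zeros(n)
    y = np.zeros(m)
    # normal-equations matrix of the proximally regularized subproblem:
    # (I + rho*A'A + (1/rho)*I) x = c - A'y + rho*A'b + x_k/rho
    M = (1.0 + 1.0 / rho) * np.eye(n) + rho * A.T @ A
    for _ in range(n_iter):
        rhs = c - A.T @ y + rho * A.T @ b + x / rho
        x = np.linalg.solve(M, rhs)   # augmented-Lagrangian + proximal step
        y = y + rho * (A @ x - b)     # multiplier (dual) update
    return x, y

# usage: this projects c onto the affine space {x : Ax = b}
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 20))
b = rng.standard_normal(5)
x, _ = proximal_method_of_multipliers(A, b, c=rng.standard_normal(20))
print(np.linalg.norm(A @ x - b))  # near 0 at convergence
```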
“…where L^* denotes the adjoint operator of L, f^* denotes the convex conjugate of f [35], τ > 0 is a parameter, and we set σ = 1/(‖L‖²τ), where the squared operator norm ‖L‖² is twice the maximum number of edges per node. With this choice, the variable s^(i) in the algorithm converges to a solution s⋆ of (21) [34, Theorem 4.3]. In the algorithm, the proximity operator prox_{σf*} maps each matrix Q_{n,n′}, for (n, n′) ∈ E, to the projection of Q_{n,n′} + σId onto the cone of Hermitian negative semidefinite matrices; this is achieved by computing the eigendecomposition and setting the positive eigenvalues to zero.…”
Section: Proposed Algorithm
confidence: 99%
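The projection at the end of this excerpt (eigendecompose, then set positive eigenvalues to zero) translates directly into code. The sketch below assumes numpy; the helper name prox_sigma_f_conj for the quoted prox_{σf*} step and the value of σ are illustrative.

```python
import numpy as np

def project_nsd(Q):
    """Project a Hermitian matrix Q onto the negative semidefinite cone."""
    w, V = np.linalg.eigh(Q)       # real eigenvalues, unitary eigenvectors
    w = np.minimum(w, 0.0)         # set positive eigenvalues to zero
    return (V * w) @ V.conj().T    # reassemble V diag(w) V^H

def prox_sigma_f_conj(Q, sigma):
    # prox described in the excerpt: shift by sigma*Id, then project
    return project_nsd(Q + sigma * np.eye(Q.shape[0]))

# usage on a random 3x3 Hermitian matrix
rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Q = (B + B.conj().T) / 2
P = prox_sigma_f_conj(Q, sigma=0.5)
print(np.linalg.eigvalsh(P))  # all eigenvalues <= 0
```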