2019
DOI: 10.1007/s10107-019-01365-4

Perturbed proximal primal–dual algorithm for nonconvex nonsmooth optimization

Cited by 61 publications (67 citation statements)
References 52 publications
“…Recently, a proximal algorithm (PG-EXTRA) was proposed in [3], which uses a constant stepsize and achieves an o(1/R) rate for nondifferentiable but convex optimization. Recent attention for solving problem (1) has been devoted in [4] to the so-called perturbed proximal primal-dual algorithm (PProx-PDA), in which a primal gradient descent step is performed, followed by an approximate dual gradient ascent step. Our algorithm is closely related to PProx-PDA, which achieves sublinear convergence to only an ε-stationary point.…”
Section: Related Work
confidence: 99%
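For orientation, the two-step structure described in the quote above can be sketched in a few lines of Python. This is a rough illustration only, assuming the generic problem min_x f(x) + h(x) subject to Ax = b with h a weighted ℓ1-norm; the function names, the damped ("perturbed") dual update, and all parameter choices below are our assumptions for illustration, not the exact PProx-PDA updates of [4].

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (closed form for the l1 term).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pprox_pda_sketch(grad_f, A, b, x0, lam0,
                     beta=10.0, gamma=1.0, eps=0.1,
                     h_weight=0.1, iters=500):
    # One plausible reading of a perturbed proximal primal-dual loop:
    # a primal proximal gradient step on the augmented Lagrangian,
    # followed by a damped (perturbed) dual gradient ascent step.
    x, lam = x0.copy(), lam0.copy()
    step = 1.0 / beta  # primal stepsize tied to the penalty (assumption)
    for _ in range(iters):
        # Primal: gradient of the smooth part of the augmented Lagrangian ...
        g = grad_f(x) + A.T @ (lam + beta * (A @ x - b))
        # ... then the prox of the nonsmooth term h = h_weight * ||.||_1.
        x = soft_threshold(x - step * g, step * h_weight)
        # Dual: approximate ascent; the (1 - eps * gamma) damping is the
        # "perturbation" that keeps the dual iterates bounded.
        lam = (1.0 - eps * gamma) * lam + gamma * (A @ x - b)
    return x, lam
```

For instance, with f(x) = ½‖Cx − d‖² one would pass grad_f = lambda x: C.T @ (C @ x - d). The damping constant eps > 0 trades exact feasibility for bounded dual iterates, which is the hallmark of the perturbed dual step.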
“…In fact, matrix B is often used to eliminate the nonconvexity in the augmented Lagrangian, so that the resulting subproblem is strongly convex, or even admits a closed-form solution, by choosing matrix B with $A^{\top}A + B^{\top}B \succeq I_N$. Although the parameters ρ, γ and β are fixed for all r, it can be shown that adapting the parameters accelerates the convergence of the algorithm [4]. Finally, we note that step 2 in Algorithm 1 is decomposable over the variables, and is therefore well suited to distributed implementation.…”
Section: Algorithm Development
confidence: 99%
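The role of B can be made concrete with a small numerical check. The construction below (B as a matrix square root, the constant c, and the dimensions) is an assumption for illustration, not necessarily the choice made in [4]: picking $B^{\top}B = cI - A^{\top}A$ with $c \ge \max(\lambda_{\max}(A^{\top}A), 1)$ gives $A^{\top}A + B^{\top}B = cI \succeq I_N$.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((5, n))

# Choose c >= max(lambda_max(A^T A), 1) so that c*I - A^T A is PSD
# and A^T A + B^T B = c*I >= I_N.
c = max(np.linalg.eigvalsh(A.T @ A).max(), 1.0) + 1e-6

# B symmetric PSD with B^T B = B @ B = c*I - A^T A (illustrative choice).
B = np.real(sqrtm(c * np.eye(n) - A.T @ A))

M = A.T @ A + B.T @ B
print(np.allclose(M, c * np.eye(n), atol=1e-6))          # True: M = c*I
print(np.linalg.eigvalsh(M - np.eye(n)).min() >= -1e-8)  # True: M >= I_N
```

Since $A^{\top}A + B^{\top}B = cI$ is diagonal, the quadratic part of the per-iteration subproblem separates over the entries of x, which is the decomposability the passage refers to.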
“…(i) The paper utilizes a regularized primal-dual method, where the regularization takes the form of a strongly concave term in the dual vector variable added to the Lagrangian function [26]; see also [35] and [24,21]. The strongly concave regularization term plays a critical role in establishing Q-linear convergence of the proposed algorithm.…”
confidence: 99%
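In generic form (our notation; the weight ν and the problem template are placeholders, not necessarily those of [26]), the regularization described here subtracts a strongly concave quadratic in the dual variable from the Lagrangian:

```latex
% Regularized Lagrangian for  min_x f(x)  s.t.  Ax = b  (generic template):
\mathcal{L}_{\nu}(x,\lambda)
  = f(x) + \langle \lambda,\, Ax - b \rangle - \frac{\nu}{2}\,\|\lambda\|^2 .
```

For any ν > 0 this is ν-strongly concave in λ, and the inner maximizer λ*(x) = (Ax − b)/ν is unique and Lipschitz in x; this is the structural property behind the Q-linear convergence claims, at the price that stationary points of the regularized problem are only approximately feasible.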
“…The strongly concave regularization term plays a critical role in establishing Q-linear convergence of the proposed algorithm. However, as an artifact of this regularization, existing works for time-invariant convex programs [26], time-varying convex programs [6], and static nonconvex problems [21] could only prove that gradient-based iterative methods approach an approximate KKT point [2]. In contrast, the present paper provides analytical results on tracking a KKT point (as opposed to an approximate KKT point) of the nonconvex problem (1.3) and bounds the distance of the algorithmic iterates from a KKT trajectory.…”
confidence: 99%
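For reference, a common way to formalize the distinction (our generic template; the precise definition used in [2] may differ, and a nonsmooth objective would replace the gradient by a subdifferential distance) is:

```latex
% (x, lambda) is an eps-approximate KKT point of  min_x f(x)  s.t.  Ax = b  if
\|\nabla f(x) + A^{\top}\lambda\| \le \varepsilon
\quad\text{and}\quad
\|Ax - b\| \le \varepsilon ,
% an exact KKT point is the case eps = 0; "tracking a KKT trajectory" means
% staying within a bounded distance of a time-varying exact KKT point.
```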