2022
DOI: 10.48550/arxiv.2204.00647
Preprint

Conditions for linear convergence of the gradient method for non-convex optimization

Abstract: In this paper, we derive a new linear convergence rate for the gradient method with fixed step lengths for non-convex smooth optimization problems satisfying the Polyak-Łojasiewicz (PŁ) inequality. We establish that the PŁ inequality is a necessary and sufficient condition for linear convergence to the optimal value for this class of problems. We list some related classes of functions for which the gradient method may enjoy a linear convergence rate. Moreover, we investigate their relationship with the PŁ ine…
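For context, here is a standard textbook-style sketch (under the usual assumptions, and not necessarily with the paper's exact constants) of why the PŁ inequality yields a linear rate for the gradient method with the fixed step length 1/L on an L-smooth function f with optimal value f*:

\[
\frac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr) \qquad \text{(PŁ inequality)},
\]
\[
f(x_{k+1}) \;\le\; f(x_k) - \frac{1}{2L}\,\|\nabla f(x_k)\|^2 \qquad \text{for } x_{k+1} = x_k - \tfrac{1}{L}\nabla f(x_k)\ \text{(descent lemma)},
\]
\[
\Longrightarrow\quad f(x_{k+1}) - f^* \;\le\; \Bigl(1 - \frac{\mu}{L}\Bigr)\bigl(f(x_k) - f^*\bigr),
\]

so the optimality gap contracts by the factor 1 - μ/L at every iteration.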

Cited by 3 publications (6 citation statements)
References 18 publications
“…Recently the authors showed that the PŁ inequality is a necessary and sufficient condition for the linear convergence of the gradient method with constant step lengths for L-smooth functions; see [2, Theorem 5]. In what follows, we establish that the PŁ inequality is a necessary condition for the linear convergence of ADMM.…”
Section: Definition
confidence: 65%
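As a rough numerical illustration of the sufficiency direction (a sketch, not the authors' code), consider gradient descent with the fixed step 1/L on f(x) = x^2 + 3 sin^2(x), a standard example of a non-convex function satisfying the PŁ inequality; the gap f(x_k) - f* shrinks geometrically:

    import math

    def f(x):
        # Non-convex but PL function: f(x) = x^2 + 3*sin(x)^2, minimized at x = 0 with f* = 0
        return x * x + 3.0 * math.sin(x) ** 2

    def grad(x):
        # f'(x) = 2x + 3*sin(2x)
        return 2.0 * x + 3.0 * math.sin(2.0 * x)

    L = 8.0   # f''(x) = 2 + 6*cos(2x) lies in [-4, 8], so f is L-smooth with L = 8
    x = 2.0   # arbitrary starting point
    for k in range(30):
        x = x - grad(x) / L      # gradient step with fixed length 1/L
        print(k, f(x))           # the gap f(x_k) - f* = f(x_k) decreases geometrically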
“…Deng et al. [7] show that the sequence $\{(x_k, z_k, \lambda_k)\}$ converges linearly to a saddle point under the two scenarios given in Table 1. It is worth mentioning that Scenario 1 or Scenario 2 implies strong convexity of the dual objective function, and therefore the PŁ inequality follows; see [2]. Hence, Theorem 6 implies linear convergence in terms of the dual value under Scenario 1 or Scenario 2.…”
Section: Definition
confidence: 89%
“…We verify that $x_1 = -\tfrac{1}{4}M\gamma$ and that the sequence $(x_t)_t$ cycles back to $\tfrac{3}{4}M\gamma$. Therefore, the sequence itself does not converge, and the sequence of the PR averaged iterates converges to $\tfrac{1}{4}M\gamma$, while the optimum value would be 0.…”
Section: Organisation of the Appendix
confidence: 99%
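As a toy illustration of that behaviour (a hypothetical sketch, not the cited appendix's actual recursion), take an iterate sequence that simply alternates between the two values quoted above: the iterates cycle forever, but their running (Polyak-Ruppert) average tends to (1/4)Mγ rather than to the optimal value 0.

    # Hypothetical values; the quote leaves M and gamma unspecified
    M, gamma = 1.0, 1.0
    x = 0.75 * M * gamma            # start at 3/4*M*gamma, then alternate with -1/4*M*gamma
    running_sum = 0.0
    for t in range(1, 10001):
        running_sum += x
        x = -0.25 * M * gamma if x > 0 else 0.75 * M * gamma
    print(running_sum / 10000)      # approximately 0.25*M*gamma, not the optimum 0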
“…Therefore, many authors studied conditions in between convexity and strong convexity, aiming to obtain faster rates under relatively generic assumptions. In particular, different authors considered the restricted secant inequality [55,22], the error bound [35], Łojasiewicz-type inequalities [44], and many more [25,29,33,19,36,24,1].…”
Section: Introduction
confidence: 99%