Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/201
Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization

Abstract: Various types of parameter restart schemes have been proposed for proximal gradient algorithms with momentum to facilitate their convergence in convex optimization. However, under parameter restart, the convergence of the proximal gradient algorithm with momentum remains obscure in nonconvex optimization. In this paper, we propose a novel proximal gradient algorithm with momentum and parameter restart for solving nonconvex and nonsmooth problems. Our algorithm is designed to 1) allow for adopting flexibl…
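The abstract describes an accelerated proximal gradient method whose momentum is reset by a flexible restart criterion. The sketch below is a minimal illustration of that general idea — an accelerated proximal gradient loop with a function-value restart heuristic applied to a LASSO-type problem — not the paper's exact algorithm; the names `apg_restart` and `soft_threshold` and the example problem are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||x||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg_restart(grad_f, prox, x0, step, n_iters=500, obj=None):
    """Accelerated proximal gradient with a function-value restart heuristic.

    Illustrative sketch only: when the objective increases, the momentum
    parameter is reset to 1 (one common 'flexible' restart criterion).
    """
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iters):
        x_new = prox(y - step * grad_f(y), step)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        if obj is not None and obj(x_new) > obj(x):
            # Restart: discard accumulated momentum.
            t_new = 1.0
            y = x_new.copy()
        x, t = x_new, t_new
    return x

# Example: min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1 (hypothetical data).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
grad = lambda x: A.T @ (A @ x - b)
prox = lambda v, s: soft_threshold(v, lam * s)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
x_star = apg_restart(grad, prox, np.zeros(10), 1.0 / L, obj=f)
```

At a solution, `x_star` is (approximately) a fixed point of the proximal gradient step, which is the standard stationarity check for nonsmooth composite problems.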

Cited by 7 publications (2 citation statements)
References 7 publications
“…To provide a neat version of thesis with closely correlated topics, this thesis does not include all of the author's works. We briefly talk about some representatives of the author's other research works [56,64,59,123,61,60,125,101,116,47,130,129,133] as follows.…”
Section: Other PhD Research (mentioning)
confidence: 99%
“…Moreover, it also generalizes other global geometries such as strong convexity and PŁ geometry. In the existing literature, the KŁ geometry has been exploited extensively to analyze the convergence rate of various gradient-based algorithms in nonconvex optimization, e.g., gradient descent Attouch and Bolte (2009); Li et al (2017) and its accelerated version Zhou et al (2020) as well as the distributed version Zhou et al (2016a). Hence, we are highly motivated to study the convergence rate of variable convergence of GDA in nonconvex minimax optimization under the KŁ geometry.…”
Section: Introduction (mentioning)
confidence: 99%