2023
DOI: 10.48550/arxiv.2301.02060
Preprint

A first-order augmented Lagrangian method for constrained minimax optimization

Abstract: In this paper we study a class of constrained minimax problems. In particular, we propose a first-order augmented Lagrangian method for solving them, whose subproblems turn out to be much simpler structured minimax problems that are suitably solved by a first-order method recently developed in [26] by the authors. Under some suitable assumptions, an operation complexity of O(ε⁻⁴ log ε⁻¹), measured by its fundamental operations, is established for the first-order augmented Lagrangian method for finding an ε-KKT solution of the constrained minimax problems.
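The abstract stops short of pseudocode, so the following is a minimal hypothetical sketch of a generic first-order augmented Lagrangian loop for an equality-constrained minimax problem min_x max_y f(x, y) subject to g(x) = 0. The paper solves each subproblem with the authors' first-order method from [26]; plain gradient descent-ascent stands in as the inner solver here, and every name and step-size choice (al_minimax, eta_x, rho_growth, the toy instance) is an illustrative assumption, not the paper's algorithm.

import numpy as np

def al_minimax(grad_f_x, grad_f_y, g, jac_g, x, y, lam,
               rho=1.0, rho_growth=2.0, outer_iters=8,
               inner_iters=500, eta_x=0.5, eta_y=0.1):
    """Augmented Lagrangian loop for min_x max_y f(x, y) s.t. g(x) = 0, with
    L_rho(x, y, lam) = f(x, y) + lam @ g(x) + (rho / 2) * ||g(x)||^2."""
    for _ in range(outer_iters):
        # Inner loop: approximately solve the minimax subproblem
        # min_x max_y L_rho(x, y, lam) by plain gradient descent-ascent.
        for _ in range(inner_iters):
            gx = g(x)
            # grad_x L_rho = grad_x f + J_g(x)^T (lam + rho * g(x))
            grad_x = grad_f_x(x, y) + jac_g(x).T @ (lam + rho * gx)
            x = x - (eta_x / (1.0 + rho)) * grad_x  # descent step, damped as rho grows
            y = y + eta_y * grad_f_y(x, y)          # ascent step; the penalty is y-free
        # Classical multiplier update, then tighten the penalty.
        lam = lam + rho * g(x)
        rho *= rho_growth
    return x, y, lam

# Toy instance: f(x, y) = 0.5 x^2 + x y - 0.5 y^2 subject to x - 1 = 0;
# the constrained saddle point is x = y = 1 with multiplier lam = -2.
x, y, lam = al_minimax(grad_f_x=lambda x, y: x + y,
                       grad_f_y=lambda x, y: x - y,
                       g=lambda x: x - 1.0,
                       jac_g=lambda x: np.eye(1),
                       x=np.zeros(1), y=np.zeros(1), lam=np.zeros(1))
print(x, y, lam)  # x and y approach [1.], lam approaches [-2.]

Note that the only terms the augmented Lagrangian adds to f, namely lam @ g(x) and the quadratic penalty, depend on x alone, which is what leaves each subproblem a "much simpler structured" minimax problem in the abstract's sense.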

Cited by 1 publication (1 citation statement) · References 26 publications
“…Subsequently, Luo et al [2020] presented a class of efficient stochastic recursive GDA methods to solve stochastic Non-Convex Strongly-Concave (NC-SC) minimax problems. More recently, Lu and Mei [2023] proposed a first-order augmented Lagrangian method to solve constrained nonconvex-concave minimax problems with nonsmooth regularization. Another class of approaches is the alternating (two-timescale) GDA, which uses only a single loop to update the primal and dual variables x and y with different learning rates.…”
Section: Algorithm
Confidence: 99%
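The excerpt above describes alternating (two-timescale) GDA only in words. Below is a minimal hypothetical sketch, not taken from any cited paper: a single loop that updates x and y alternately with distinct learning rates. The function name alternating_gda, the step sizes, and the toy objective are all illustrative assumptions.

import numpy as np

def alternating_gda(grad_f_x, grad_f_y, x, y,
                    eta_x=1e-3, eta_y=1e-1, iters=20_000):
    """Single-loop alternating GDA for min_x max_y f(x, y); taking eta_y much
    larger than eta_x is the usual two-timescale choice for
    nonconvex-(strongly-)concave problems."""
    for _ in range(iters):
        x = x - eta_x * grad_f_x(x, y)   # slow primal (descent) step
        y = y + eta_y * grad_f_y(x, y)   # fast dual (ascent) step at the new x
    return x, y

# Example on f(x, y) = x y - 0.5 y^2 (linear in x, strongly concave in y);
# the unique saddle point is x = y = 0.
x, y = alternating_gda(lambda x, y: y, lambda x, y: x - y,
                       x=np.array([2.0]), y=np.array([-1.0]))
print(x, y)  # both approach [0.]

The fast dual step keeps y close to the best response against the current x, so the slow primal step effectively descends the envelope function max_y f(x, y), which is the intuition behind the two-timescale choice.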