2022
DOI: 10.1109/tpami.2021.3132674
Investigating Bi-Level Optimization for Learning and Vision From a Unified Perspective: A Survey and Beyond

Cited by 81 publications (36 citation statements) · References 106 publications
“…Hyper-parameter optimization (Lorraine and Duvenaud 2018; Okuno, Takeda, and Kawana 2018; Franceschi et al. 2018) uses bilevel optimization extensively. The idea of bilevel optimization has also been applied to meta learning (Zintgraf et al. 2019; Song et al. 2019; Soh, Cho, and Cho 2020), neural architecture search (Liu, Simonyan, and Yang 2018; Wong et al. 2018; Xu et al. 2019), adversarial learning (Tian et al. 2020; Gao et al. 2020), deep reinforcement learning (Tschiatschek et al. 2019), sparse learning (Huang 2019, 2020; Poon and Peyré 2021), etc. For a more thorough review of these applications, please refer to Table 2 of the survey paper by Liu et al. (2021).…”
Section: Related Work
confidence: 99%
“…Here the UL and LL objectives are smooth functions with Lipschitz continuous first- and second-order derivatives.…”
Section: Bilevel Alternating Gradient With Dual Correction
confidence: 99%
“…Now we can understand the existing explicit and implicit gradient-based BLOs in a unified way. From the viewpoint of non-asymptotic convergence analysis, roughly speaking, the existing BLO methods proceed in three steps: (1) given the UL variable, update the LL variable by solving the LL optimization problem up to a tolerance that satisfies prescribed dynamic accuracy requirements — from the viewpoint of the KKT conditions, this step guarantees feasibility; (2) given the UL and LL variables, update the dual multipliers so that they accurately approximate the correct multipliers.…”
Section: A Single-level Reformulation and Dual Correction
confidence: 99%
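The stepwise pattern quoted above can be sketched on a toy problem. The sketch below is a minimal illustration, not any specific method from the survey: the quadratic objectives F and g and all step sizes are hypothetical, chosen so the lower-level solution y*(x) = x and the upper-level optimum x* = 3 are known in closed form, and the dual/implicit correction reduces to dy*/dx = -(∂²g/∂y²)⁻¹(∂²g/∂x∂y) = 1.

```python
# Toy bilevel problem (hypothetical, for illustration only):
#   UL: minimize F(x, y) = (y - 3)^2 over x
#   LL: y*(x) = argmin_y g(x, y) = (y - x)^2, so y*(x) = x
# The loop follows the three-step pattern: solve the LL problem to a
# tolerance, form the implicit correction, then update the UL variable.

def lower_level_solve(x, y, lr=0.4, tol=1e-8, max_iter=100):
    """Step 1: update the LL variable up to a prescribed tolerance."""
    for _ in range(max_iter):
        grad_y = 2.0 * (y - x)          # dg/dy
        if abs(grad_y) < tol:
            break
        y -= lr * grad_y
    return y

def hypergradient(x, y):
    """Steps 2-3: implicit differentiation supplies the correction term.
    Here dy*/dx = -(d2g/dy2)^{-1} (d2g/dx dy) = -(1/2)(-2) = 1."""
    dF_dy = 2.0 * (y - 3.0)
    dy_dx = 1.0
    dF_dx = 0.0                          # F has no direct x dependence
    return dF_dx + dF_dy * dy_dx

x = 0.0
for _ in range(200):
    y = lower_level_solve(x, 0.0)        # inner loop to tolerance
    x -= 0.1 * hypergradient(x, y)       # outer (UL) gradient step

print(round(x, 4))  # converges to the known optimum x* = 3
```

Each outer iteration contracts x toward 3 (the update is x ← 0.8x + 0.6), mirroring how practical BLO solvers interleave approximate inner solves with corrected outer gradient steps.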
“…There is also research specifically focused on the bi-level optimization problem; see the survey of bi-level optimization algorithms [102] for a full description. Advances in bi-level optimization can facilitate research on robust SSL.…”
Section: Label Distribution Mismatch
confidence: 99%