2021
DOI: 10.48550/arxiv.2105.05210
Preprint

Accelerated Forward-Backward Optimization using Deep Learning

Abstract: We propose several deep-learning accelerated optimization solvers with convergence guarantees. We use ideas from the analysis of accelerated forward-backward schemes like FISTA, but instead of the classical approach of proving convergence for a choice of parameters, such as a step-size, we show convergence whenever the update is chosen in a specific set. Rather than picking a point in this set using some predefined method, we train a deep neural network to pick the best update. Finally, we show that the method…
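
The abstract describes updates that are accepted only when they lie in a set for which convergence can still be guaranteed, with a deep neural network proposing the candidate point. The sketch below illustrates that safeguarding pattern on an l1-regularized least-squares problem in plain NumPy; it is a minimal sketch under assumptions, not the paper's method. The `propose` argument stands in for a trained network, and the acceptance test (a norm condition around the standard proximal-gradient point) is an assumed, simplified rule chosen for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def safeguarded_fb(A, b, lam, propose, n_iter=200):
    """Forward-backward iteration that accepts a proposed update only if it
    stays within a ball around the standard proximal-gradient point
    (assumed safeguard rule); otherwise it falls back to that point."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of x -> A.T @ (A @ x - b)
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x_fb = soft_threshold(x - step * grad, step * lam)   # plain FB step
        x_prop = propose(x, grad, step, lam)                 # e.g. a trained network
        # Assumed acceptance test: the proposal must stay close to the FB point,
        # with a radius proportional to the length of the FB step.
        if np.linalg.norm(x_prop - x_fb) <= 0.5 * np.linalg.norm(x_fb - x):
            x = x_prop
        else:
            x = x_fb
    return x

# Usage: a trivial proposal that returns the FB point itself, so the scheme
# reduces to ordinary proximal gradient descent.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
x_hat = safeguarded_fb(
    A, b, lam=0.1,
    propose=lambda x, g, s, lm: soft_threshold(x - s * g, s * lm),
)
```

With the trivial proposal shown in the usage lines, the scheme reduces to ordinary proximal gradient descent; a learned proposal would aim to pick faster updates that still pass the acceptance test.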

Cited by 5 publications (9 citation statements)
References 25 publications

“…Their resulting fixed points satisfy general notions of equilibrium (similar to the one arising in neural network architectures/paradigms, plug-and-play methods [21], [43], [34]) instead of being the solution to a minimization problem. In this aspect, our work is in line with the recent efforts to design [49], [48] and learn [3], [10], [43], [12] more expressive variants of well-known optimization schemes while keeping convergence guarantees.…”
Section: Introduction (supporting)
confidence: 55%
“…Safeguarding is a common technique to ensure global convergence in optimization algorithms, for instance the Wolfe conditions in line-search [25,Chapter 3] ensure a sufficient decrease in the objective function value, and trust-region methods [25,Chapter 4] are based on a quadratic model having sufficient accuracy within a given radius. Recently, a norm condition similar to (4) has been combined with a deep-learning approach to speed up the convergence [4]. Even for monotone operators, line-search strategies with safeguarding have been developed, see [36,Eq.…”
Section: Introduction (mentioning)
confidence: 99%
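
The statement above appeals to line-search safeguards such as the Wolfe conditions, which accept a step only if it yields sufficient decrease. As a small, self-contained illustration of that sufficient-decrease idea, here is a hedged Armijo backtracking sketch; the test function, constants, and names are placeholders chosen for the example, not taken from any of the cited works.

```python
import numpy as np

def armijo_backtracking(f, grad_f, x, direction, c1=1e-4, shrink=0.5, t0=1.0):
    """Shrink the step size until the Armijo sufficient-decrease condition
    f(x + t*d) <= f(x) + c1 * t * <grad f(x), d> holds."""
    t = t0
    fx = f(x)
    slope = grad_f(x) @ direction        # negative for a descent direction
    while f(x + t * direction) > fx + c1 * t * slope:
        t *= shrink
    return t

# Usage on a simple quadratic f(x) = 0.5 * ||x||^2 with the steepest-descent direction.
f = lambda x: 0.5 * float(x @ x)
grad_f = lambda x: x
x = np.array([3.0, -2.0])
d = -grad_f(x)
t = armijo_backtracking(f, grad_f, x, d)
x_next = x + t * d
```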
“…It uses two deviation vectors and a slightly more involved safeguard condition. A similar algorithm with deviation vectors has been proposed in [4] to extend the proximal gradient method for convex minimization. The fact that we consider the more general monotone inclusion setting allows us to apply our results, e.g., to the Chambolle-Pock [7] and Condat-Vũ [13,38] methods that both are preconditioned FB methods [20].…”
Section: Introduction (mentioning)
confidence: 99%
“…• Banert et al (2020) consider theoretical foundations for data-driven nonsmooth optimization and show applications in deblurring and in solving inverse problems for computed tomography; this work is further developed in Banert et al (2021).…”
Section: Related Work (mentioning)
confidence: 99%