2021
DOI: 10.1137/20m1357500

Choose Your Path Wisely: Gradient Descent in a Bregman Distance Framework

Abstract: We propose an extension of a special form of gradient descent, known in the literature as linearized Bregman iteration, to a larger class of nonconvex functions. We replace the classical (squared) two-norm metric in the gradient descent setting with a generalized Bregman distance, based on a proper, convex, and lower semicontinuous function. The algorithm's global convergence is proven for functions that satisfy the Kurdyka-Lojasiewicz property. Examples illustrate that features of different scale are being intr…
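The abstract extends linearized Bregman iteration, which replaces the squared two-norm of plain gradient descent with the Bregman distance of a convex function J. The following is a minimal sketch of the classical convex case only (E(x) = ½‖Ax − b‖² with the elastic-net choice J(x) = ‖x‖₁ + ‖x‖²/(2δ)); the function names, step-size rule, and parameters are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def shrink(v, t):
    # Soft-thresholding: proximal map of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, tau=None, delta=10.0, iters=500):
    """Sketch of linearized Bregman iteration for E(x) = 0.5*||Ax - b||^2
    with the elastic-net Bregman function J(x) = ||x||_1 + ||x||^2 / (2*delta).
    Step size tau is a conservative default, assumed, not from the paper."""
    m, n = A.shape
    if tau is None:
        tau = 1.0 / (delta * np.linalg.norm(A, 2) ** 2)  # spectral-norm-based step
    v = np.zeros(n)  # accumulated (sub)gradient / dual variable
    x = np.zeros(n)
    for _ in range(iters):
        v = v - tau * A.T @ (A @ x - b)  # gradient step on E at the current x
        x = delta * shrink(v, 1.0)       # map back through the proximal step of J
    return x
```

Each iteration takes a gradient step in the auxiliary variable v and maps back through the proximal step of J; for the elastic-net J this is soft-thresholding followed by scaling with δ, which is what produces sparse iterates and the scale-dependent feature behavior the abstract alludes to.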

Cited by 8 publications (12 citation statements) | References 68 publications
“…Although a group of numerical tests were reported in [4] to demonstrate that the LBreI in nonconvex optimization still leads to performance superior to that of the regularized problems (1.2), the current theory is far from satisfying. On the one hand, as partially mentioned in Section 4.2 of [4], the required gradient Lipschitz continuity assumption precludes the application of LBreI to many practical problems, such as blind deconvolution problems, Poisson inverse problems, and quadratic inverse problems. On the other hand, it is unclear whether results similar to Theorem 1.1 can be established for a general convex energy function E. These two aspects constitute the main motivation of this study.…”
Section: Linearized Bregman Iterations
confidence: 98%
“…Very recently, a nonconvex extension of the LBreI, allowing E(x) = E(Ax, b) to take a general form with a Lipschitz continuous gradient, was made in [4]. Although a group of numerical tests were reported in [4] to demonstrate that the LBreI in nonconvex optimization still leads to performance superior to that of the regularized problems (1.2), the current theory is far from satisfying.…”
Section: Linearized Bregman Iterations
confidence: 99%