2004
DOI: 10.1137/s0363012902419977
Hessian Riemannian Gradient Flows in Convex Programming

Abstract: In view of solving theoretically constrained minimization problems, we investigate the properties of the gradient flows with respect to Hessian Riemannian metrics induced by Legendre functions. The first result characterizes Hessian Riemannian structures on convex sets as metrics that have a specific integration property with respect to variational inequalities, giving a new motivation for the introduction of Bregman-type distances. Then, the general evolution problem is introduced, and global conver…
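As a concrete illustration of the flows studied in the paper (the example itself is not taken from it), take the Legendre function h(x) = Σᵢ(xᵢ log xᵢ − xᵢ) on the positive orthant. Its Hessian ∇²h(x) = diag(1/xᵢ) induces the Riemannian metric, and the Hessian Riemannian gradient flow ẋ = −∇²h(x)⁻¹∇f(x) becomes ẋᵢ = −xᵢ ∂f/∂xᵢ, a Lotka–Volterra-type system whose trajectories stay in the orthant. A minimal forward-Euler sketch (function names, step size, and iteration count are illustrative assumptions):

```python
import numpy as np

def hessian_riemannian_flow(grad_f, x0, step=1e-3, n_steps=20000):
    """Forward-Euler discretization of x' = -diag(x) * grad_f(x),
    the Hessian Riemannian gradient flow induced by the Legendre
    function h(x) = sum(x*log(x) - x) on the positive orthant."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        # Entrywise multiplication by x is the inverse-Hessian scaling;
        # for small steps the iterate remains strictly positive.
        x = x - step * x * grad_f(x)
    return x

# Minimize f(x) = 0.5*||x - a||^2 over the positive orthant; since a > 0,
# the unconstrained minimizer a is also the constrained one.
a = np.array([2.0, 0.5])
x_star = hessian_riemannian_flow(lambda x: x - a, np.array([1.0, 1.0]))
```

Because the metric blows up at the boundary, the flow never needs an explicit projection onto the constraint set; positivity is preserved by the dynamics themselves.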

Cited by 73 publications (142 citation statements).
References 45 publications (96 reference statements).
“…Equivalently, we can interpret the higher-order gradient algorithm [25] as a discretization of the rescaled gradient flow [27] with time step δ = ε^{1/(p−1)}, so t = δk = ε^{1/(p−1)}k. With this identification, the convergence rates in discrete time, O(1/(εk^{p−1})), and in continuous time, O(1/t^{p−1}), match. The convergence rate for the continuous-time dynamics does not require any assumption beyond the convexity and differentiability of f (as in the case of the Lagrangian flow [6]), whereas the convergence rate for the discrete-time algorithm requires the higher-order smoothness assumption on f. We note that the limiting case p → ∞ of [27] is the normalized gradient flow, which has been shown to converge to the minimizer of f in finite time [29]. We also note that unlike the Lagrangian flow, the family of rescaled gradient flows is not closed under time dilation.…”
Section: Polynomial Convergence Rates and Accelerated Methods
confidence: 99%
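For the smooth convex case p = 2, this identification is the classical one: gradient descent with step ε is the Euler discretization of gradient flow at time t = εk, and for an L-smooth convex f with ε ≤ 1/L the standard bound f(x_k) − f* ≤ ‖x₀ − x*‖²/(2εk) matches the continuous O(1/t) rate. A quick numerical check of that bound (the quadratic f and step size are chosen for illustration):

```python
import numpy as np

# f(x) = 0.5 * x^T A x is convex and L-smooth with L = max eigenvalue of A;
# its minimizer is x* = 0 with f* = 0.
A = np.diag([1.0, 10.0])
L = 10.0
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

eps = 1.0 / L            # step size: the ε in the identification t = εk
x0 = np.array([1.0, 1.0])
x = x0.copy()
for k in range(1, 101):
    x = x - eps * grad(x)
    # discrete-time rate: f(x_k) - f* <= ||x0 - x*||^2 / (2*eps*k)
    bound = np.dot(x0, x0) / (2 * eps * k)
    assert f(x) <= bound + 1e-12
```

On this strongly convex example the actual decay is much faster than the O(1/(εk)) worst-case bound, which is all the assertion verifies.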
“…In particular, we show in SI Appendix, H. Further Properties that we can recover natural gradient flow as the strongfriction limit of a Bregman Lagrangian flow with an appropriate choice of parameters. Similarly, we can recover the rescaled gradient flow [27] as the strong-friction limit of a Lagrangian flow that uses the pth power of the norm as the kinetic energy. Therefore, the general family of second-order Lagrangian flows is more general and includes first-order gradient flows in its closure.…”
Section: Further Explorations Of the Bregman Lagrangian
confidence: 99%
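For reference, the Bregman Lagrangian family this excerpt refers to (as introduced by Wibisono, Wilson, and Jordan; notation assumed from that work) is

$$\mathcal{L}(X, V, t) = e^{\alpha_t + \gamma_t}\left( D_h\!\left(X + e^{-\alpha_t} V,\; X\right) - e^{\beta_t} f(X) \right),$$

where $D_h(y, x) = h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle$ is the Bregman divergence of a convex distance-generating function $h$; the strong-friction (overdamped) limit of the associated Euler–Lagrange flow is what recovers the first-order gradient flows mentioned above.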
“…Several convergence results have been obtained recently: Attouch and Teboulle (2004), using a regularized Lotka–Volterra dynamical system, proved convergence of the continuous method to a point that belongs to a certain set containing the set of optimal points; see also Alvarez, Bolte, and Brahic (2004), which treats a general class of dynamical systems that includes the one of Attouch and Teboulle. Souza et al (2010), Cunha et al (2010), Chen and Pan (2008) and Pan and Chen (2007) studied the iteration…”
Section: Introduction
confidence: 99%
“…For smooth quasiconvex minimization on the nonnegative orthant, there are some recent works in the literature. Attouch and Teboulle [5], using a regularized Lotka–Volterra dynamical system, proved convergence of the continuous method to a point that belongs to a certain set containing the set of optimal points; see also Alvarez et al [2], which treats a general class of dynamical systems that includes the one of Attouch and Teboulle [5] and also covers quasiconvex objective functions in connection with continuous-in-time models of generalized proximal point algorithms. Cunha et al [12] and Chen and Pan [11], with a particular φ-divergence distance, proved full convergence of the proximal method to a KKT point of the problem when the parameter λ_k is bounded, and convergence to an optimal solution when λ_k → 0.…”
Section: Introduction
confidence: 99%