2017
DOI: 10.1007/s11590-017-1168-z

Dynamic smoothness parameter for fast gradient methods

Abstract: We present and computationally evaluate a variant of the fast gradient method by Nesterov that is capable of exploiting information, even if approximate, about the optimal value of the problem. This information is available in some applications, among which is the computation of bounds for hard integer programs. We show that dynamically changing the smoothness parameter of the algorithm using this information results in a better convergence profile in practice.
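The abstract describes the scheme only at a high level. As a rough illustration, here is a minimal sketch of what tying the smoothness parameter to an (approximately) known optimal value can look like: f(x) = ||Ax − b||₁ is smoothed by a Huber-type approximation, a FISTA-style fast gradient loop minimizes the smoothed function, and μ is re-set at each iteration from the gap between the current value and the target. All identifiers and the test problem are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def huber_grad(r, mu):
    """Gradient of the Huber smoothing of sum_i |r_i| with parameter mu."""
    return np.where(np.abs(r) <= mu, r / mu, np.sign(r))

def fast_gradient_dynamic_mu(A, b, f_target, iters=500, mu_min=1e-8):
    """FISTA-style loop for min_x ||Ax - b||_1 via Huber smoothing,
    with mu re-tuned each iteration from the estimated remaining gap
    to an (approximately) known optimal value f_target."""
    m, n = A.shape
    lip_A = np.linalg.norm(A, 2) ** 2   # ||A||^2; grad of f_mu is (||A||^2/mu)-Lipschitz
    D = m / 2.0                         # smoothing gap bound: f <= f_mu + mu * D
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        r = A @ y - b
        f_y = np.abs(r).sum()           # true (nonsmooth) objective at y
        # Dynamic smoothness: mu proportional to the current estimated gap,
        # mirroring the static choice mu = eps / (2 D) from smoothing theory.
        mu = max((f_y - f_target) / (2.0 * D), mu_min)
        grad = A.T @ huber_grad(r, mu)  # gradient of the smoothed objective
        x_next = y - (mu / lip_A) * grad            # step 1/L_mu = mu/||A||^2
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # Standard FISTA momentum; keeping it while mu changes is a heuristic.
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Toy usage: a consistent square system, so the optimal value is known to be 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
b = A @ rng.standard_normal(30)
x = fast_gradient_dynamic_mu(A, b, f_target=0.0)
print("final ||Ax - b||_1 =", np.abs(A @ x - b).sum())
```

The update rule replaces the unknown target accuracy ε in the usual static choice μ = ε/(2D) with the observed gap f(y) − f̄; the mu_min guard keeps μ positive when f̄ only approximates the optimal value.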

Cited by 5 publications (2 citation statements, published 2019–2024) · References 15 publications (35 reference statements)
“…As far as the classic approach is concerned, reference books, whose reading is strongly suggested, are Shor (1985) and Polyak (1987). In more recent years, the interest in subgradient-type methods was renewed, thanks to the Mirror Descent Algorithm introduced by Nemirovski and Yudin (see also Beck and Teboulle 2003), and to some papers by Nesterov (2005, 2009a) (see also the variant Frangioni et al 2018). Very recent developments are in Dvurechensky et al (2020).…”
Section: Subgradient Methods (mentioning; confidence: 99%)
“…for some σ > 0. Minimization of the smooth function f_μ(x) is then pursued via a gradient-type method (see also Frangioni et al 2018 for a discussion on tuning of the smoothing parameter μ).…”
Section: Remark (mentioning; confidence: 99%)
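The quoted remark invokes a smoothed function f_μ(x) and a modulus σ > 0 without stating their form. For context, this is the standard smoothing construction of Nesterov (2005) that such remarks typically refer to; the set U, operator A, function φ, and prox-term d below are the standard objects from that construction, reconstructed here rather than taken from the excerpt.

```latex
% Nesterov (2005) smoothing of f(x) = max_{u in U} { <Ax, u> - phi(u) }:
% subtract a prox-term d(u) that is strongly convex with modulus sigma > 0.
\[
  f_\mu(x) \;=\; \max_{u \in U} \bigl\{ \langle Ax, u\rangle - \varphi(u) - \mu\, d(u) \bigr\}
\]
% f_mu has a Lipschitz-continuous gradient and tracks f uniformly:
\[
  L_\mu \;=\; \frac{\|A\|^2}{\mu\,\sigma},
  \qquad
  f_\mu(x) \;\le\; f(x) \;\le\; f_\mu(x) + \mu\, D_U,
  \quad D_U = \max_{u \in U} d(u).
\]
```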