2003
DOI: 10.1016/s0167-6377(02)00231-6

Mirror descent and nonlinear projected subgradient methods for convex optimization

Cited by 872 publications (900 citation statements)
References 10 publications
“…Time-varying regularizers were analyzed in [24] and the analysis tightened in [27]. MD was originally proposed in [18] and later analyzed in [30] for convex optimization. In the online learning literature it makes its first appearance, with a different name, in [15].…”
Section: Previous Results
confidence: 99%
“…In these papers the authors adopted the presentation suggested by Beck and Teboulle [6], which corresponds to a Follow-the-Regularized-Leader (FTRL) type strategy. There the focus was on F being strongly convex with respect to some norm.…”
Section: Figure
confidence: 99%
“…In particular, online convex programming methods [4,7,14,23] rely on the gradient of the instantaneous loss of a predictor to update the prediction for the next data point. The aim of these methods is to ensure that the per-round performance approaches that of the best offline method with access to the entire data sequence.…”
Section: Relationship Between Program Outcomes and Previous State-of-the-Art
confidence: 99%