2007
DOI: 10.1007/s10994-007-5014-x

A primal-dual perspective of online learning algorithms

Abstract: We describe a novel framework for the design and analysis of online learning algorithms based on the notion of duality in constrained optimization. We cast a sub-family of universal online bounds as an optimization problem. Using the weak duality theorem we reduce the process of online learning to the task of incrementally increasing the dual objective function. The amount by which the dual increases serves as a new and natural notion of progress for analyzing online learning algorithms. We are thus able to ti…
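The dual-increase idea summarized in the abstract can be illustrated with a small numerical sketch. The example below is an illustration, not the authors' code: a Perceptron-style learner maintains w as a combination of dual variables, w = sum_t alpha_t y_t x_t, and each mistake step raises the SVM-style dual objective D(alpha) = sum_t alpha_t - 0.5 * ||sum_t alpha_t y_t x_t||^2 by at least 1/2 (for unit-norm instances), so the dual value acts as the running measure of progress. The toy data, step choice, and variable names are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data; instances normalized so ||x_t|| = 1.
n, d = 200, 5
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X @ w_star)

def dual_objective(alpha, X, y):
    # D(alpha) = sum_t alpha_t - 0.5 * || sum_t alpha_t y_t x_t ||^2
    v = (alpha * y) @ X
    return alpha.sum() - 0.5 * v @ v

alpha = np.zeros(n)          # one dual variable per online round
w = np.zeros(d)              # primal vector: w = sum_t alpha_t y_t x_t
mistakes, dual_values = 0, [0.0]

for t in range(n):
    if y[t] * (w @ X[t]) <= 0:          # prediction mistake
        alpha[t] = 1.0                  # Perceptron-style dual step
        w += alpha[t] * y[t] * X[t]
        mistakes += 1
    dual_values.append(dual_objective(alpha, X, y))

# On a mistake the dual increases by 1 - y_t<w, x_t> - 0.5*||x_t||^2 >= 0.5,
# so the dual objective is nondecreasing round by round.
assert all(b >= a - 1e-9 for a, b in zip(dual_values, dual_values[1:]))
print(f"mistakes = {mistakes}, final dual value = {dual_values[-1]:.3f}")
```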

Citations: cited by 141 publications (59 citation statements)
References: 21 publications
“…FTRL dates back to the potential-based forecaster in [2, Chapter 11] and its theory was developed in [28]. The name Follow The Regularized Leader comes from [16].…”
Section: Previous Results (citation type: mentioning)
Confidence: 99%
“…A somewhat similar concept was re-discovered in online learning by Herbster and Warmuth [20], Grove, Littlestone, and Schuurmans [15], and Kivinen and Warmuth [25] under the name of potential-based gradient descent; see [10, Chapter 11]. Recently, these ideas have been flourishing; see for instance Shalev-Shwartz [33], Rakhlin [30], Hazan [17], and Bubeck [7]. Our main theorem (Theorem 2) allows one to recover almost all known regret bounds for online combinatorial optimization.…”
Section: Contribution and Contents of the Paper (citation type: mentioning)
Confidence: 88%
“…Indeed, Weighted Majority is an example of a broader class of algorithms collectively known as Follow the Regularized Leader (FTRL) algorithms [34,35,58]. The FTRL template can be applied to a wide class of learning problems that fall under a general framework commonly known as online convex optimization [62].…”
Section: Online Learning and Regret-Minimizing Algorithms (citation type: mentioning)
Confidence: 99%
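As a concrete illustration of the FTRL template mentioned in the excerpt above, the sketch below (an assumption for illustration, not code from the cited works) instantiates FTRL over the probability simplex with an entropic regularizer; the resulting closed-form prediction is the familiar exponential-weights / Weighted Majority update.

```python
import numpy as np

def ftrl_entropic(loss_vectors, eta=0.5):
    """FTRL over the simplex with a negative-entropy regularizer.

    Playing argmin_p <p, cumulative loss> + (1/eta) * sum_i p_i log p_i
    has the closed form p_i proportional to exp(-eta * cumulative_loss_i),
    i.e. the exponential-weights / Weighted Majority update.
    """
    d = len(loss_vectors[0])
    cum_loss = np.zeros(d)
    total_loss = 0.0
    for loss in loss_vectors:
        logits = -eta * cum_loss
        p = np.exp(logits - logits.max())
        p /= p.sum()                      # current FTRL prediction
        total_loss += p @ loss            # expected loss this round
        cum_loss += loss                  # "leader" statistics for next round
    return total_loss, cum_loss.min()

# Example: 3 experts, 100 rounds of losses in [0, 1]; expert 0 is best on average.
rng = np.random.default_rng(1)
losses = rng.uniform(size=(100, 3))
losses[:, 0] *= 0.3
alg_loss, best_expert_loss = ftrl_entropic(losses, eta=0.5)
print(f"algorithm loss = {alg_loss:.2f}, best expert loss = {best_expert_loss:.2f}")
```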