2004
DOI: 10.1109/tit.2004.833339
On the Generalization Ability of On-Line Learning Algorithms

Abstract: In this paper, it is shown how to extract a hypothesis with small risk from the ensemble of hypotheses generated by an arbitrary on-line learning algorithm run on an independent and identically distributed (i.i.d.) sample of data. Using a simple large deviation argument, we prove tight data-dependent bounds for the risk of this hypothesis in terms of an easily computable statistic associated with the on-line performance of the ensemble. Via sharp pointwise bounds on this statistic, we then obtain risk tail bounds for kernel…

Cited by 359 publications (402 citation statements). References 38 publications.
“…In this section, we briefly introduce the related work on online machine learning (Rosenblatt 1958;Crammer and Singer 2003;Cesa-Bianchi et al 2004;Crammer et al 2006;Fink et al 2006) to have the learning inspiration for our work. Perceptron algorithm (Rosenblatt 1958;Freund and Schapire 1999) is one important online approach which updates the learning function by adding a new example with a constant weight when it is misclassified.…”
Section: Online Learning
confidence: 99%
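The mistake-driven update described in the citation above (Rosenblatt's Perceptron: add a misclassified example to the weights with constant weight) can be sketched as follows; this is a minimal illustrative implementation, not code from the cited works, and the function name and epoch loop are my own choices.

```python
import numpy as np

def perceptron(X, y, epochs=1):
    """Classic Perceptron: on each mistake, add the misclassified
    example (scaled by its +/-1 label) to the weight vector."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):          # y_i in {-1, +1}
            if y_i * (np.dot(w, x_i) + b) <= 0:  # misclassified
                w += y_i * x_i              # constant-weight update
                b += y_i
    return w, b
```

On linearly separable data the weight vector stops changing once every example is classified correctly, which is the convergence property the mistake-bound analysis relies on.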
“…This brings us to our second algorithm: We define an online learning problem that is closely related to the original statistical learning problem. We address this online problem with a modified version of the online Perceptron algorithm (Rosenblatt 1958), and then convert the online algorithm into a statistical learning algorithm using an online-to-batch conversion technique (Cesa-Bianchi et al 2004). This approach benefits from the computational efficiency of the Perceptron, and from the generalization properties and theoretical guarantees provided by the online-to-batch technique.…”
Section: Introduction
confidence: 99%
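The online-to-batch conversion referenced above (Cesa-Bianchi et al. 2004) selects one hypothesis from the online ensemble by penalized empirical risk: hypothesis i, produced before the suffix of examples was seen, is scored by its error on that held-out suffix plus a confidence penalty that shrinks as the suffix grows. The sketch below is a simplified, assumption-laden rendering of that idea (the function name, the exact penalty constant, and the 0/1-loss setup are mine), not the paper's verbatim procedure.

```python
import numpy as np

def select_hypothesis(hypotheses, X, y, delta=0.05):
    """Pick the hypothesis minimizing suffix error + confidence penalty.
    hypotheses[i] is assumed to have been produced before seeing X[i:],
    so X[i:] serves as a held-out sample for it."""
    m = len(X)
    scores = []
    for i, h in enumerate(hypotheses):
        n = m - i                           # held-out suffix size
        if n == 0:
            scores.append(np.inf)           # nothing left to estimate risk on
            continue
        err = np.mean([h(x) != t for x, t in zip(X[i:], y[i:])])
        # large-deviation-style penalty; the log(m(m+1)/delta) term is
        # one common choice for a union bound over the ensemble
        penalty = np.sqrt(np.log(m * (m + 1) / delta) / (2 * n))
        scores.append(err + penalty)
    best = int(np.argmin(scores))
    return hypotheses[best], best
```

The penalty makes early hypotheses (large held-out suffix, reliable error estimate) and late hypotheses (well trained but little validation data) trade off explicitly, which is what yields the risk tail bounds the citing paper invokes.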
“…This renewed interest for theory naturally boosted the development of performance bounds for learning machines (see e.g. Bartlett and Long 1998;Bousquet 2003;Cesa-Bianchi et al 2004;Cucker and Zhou 2007;Lugosi and Pawlak 1994;Zhou 2003, 2004;Wu and Zhou 2005;Zhou 2003 and references therein). In order to measure the generalization ability of the empirical risk minimization algorithm with i.i.d.…”
Section: Introduction
confidence: 99%