2010
DOI: 10.1007/978-3-642-16108-7_9

Approximation Stability and Boosting

Cited by 4 publications (3 citation statements)
References 14 publications
“…We use algorithmic stability to show that our collaborative approaches ensure good generalization. By extending the findings in refs 48, 49, 50, 51, the SI provides proofs that our collaborative strategies yield lower and tighter upper bounds on the generalization error. Algorithmic stability has been used in a reverse-engineering fashion: to guide which instances to exchange in order to improve overall stability and reduce the generalization error.…”
Section: Results (supporting)
confidence: 70%
“…Gao and Zhou (2010) introduce the notion of approximation stability, and prove that approximation stability is sufficient for generalization and necessary for the learnability of asymptotic empirical risk minimization (AERM). Then, Gao and Zhou (2010) prove that AdaBoost has approximation stability and thus good generalization. Moreover, an exponential bound for AdaBoost is provided.…”
Section: Boosting (mentioning)
confidence: 99%
“…One such view is due to Friedman et al (2000), who propose that boosting is stagewise additive model fitting. Gao and Zhou (2010) introduce the notion of approximation stability, and prove that approximation stability is sufficient for generalization and necessary for the learnability of asymptotic empirical risk minimization (AERM). Then, Gao and Zhou (2010) prove that AdaBoost has approximation stability and thus good generalization.…”
Section: Boosting (mentioning)
confidence: 99%
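
The citation statements above concern the generalization behavior of AdaBoost. For context, a minimal sketch of the standard AdaBoost procedure with decision stumps (Freund and Schapire's algorithm, to which the cited bound applies) is given below. This is an illustrative sketch assuming NumPy; it is not code from Gao and Zhou (2010), and the function names are chosen here for exposition.

```python
# Minimal AdaBoost sketch with decision stumps (illustrative only).
import numpy as np

def train_adaboost(X, y, n_rounds=50):
    """Return a list of (feature, threshold, polarity, alpha) stumps.
    Labels y are expected in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # example weights D_t
    ensemble = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        # exhaustive search over stumps h(x) = polarity * sign(x[j] - theta)
        for j in range(d):
            for theta in np.unique(X[:, j]):
                for polarity in (1, -1):
                    pred = polarity * np.sign(X[:, j] - theta)
                    pred[pred == 0] = polarity
                    err = np.sum(w[pred != y])
                    if err < best_err:
                        best_err, best = err, (j, theta, polarity)
        eps = max(best_err, 1e-12)
        if eps >= 0.5:                    # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1 - eps) / eps)
        j, theta, polarity = best
        pred = polarity * np.sign(X[:, j] - theta)
        pred[pred == 0] = polarity
        w *= np.exp(-alpha * y * pred)    # exponential-loss reweighting
        w /= w.sum()
        ensemble.append((j, theta, polarity, alpha))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of the stumps, thresholded at zero."""
    score = np.zeros(X.shape[0])
    for j, theta, polarity, alpha in ensemble:
        pred = polarity * np.sign(X[:, j] - theta)
        pred[pred == 0] = polarity
        score += alpha * pred
    out = np.sign(score)
    out[out == 0] = 1
    return out
```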