2007
DOI: 10.1214/07-sts242
Boosting Algorithms: Regularization, Prediction and Model Fitting

Abstract: We present a statistical perspective on boosting. Special emphasis is given to estimating potentially complex parametric or nonparametric models, including generalized linear and additive models as well as regression models for survival analysis. Concepts of degrees of freedom and corresponding Akaike or Bayesian information criteria, particularly useful for regularization and variable selection in high-dimensional covariate spaces, are discussed as well. The practical aspects of boosting procedures for fitting…

Cited by 750 publications (934 citation statements)
References 74 publications
“…The most prominent example is the Lasso (Tibshirani, 1996) and its extensions to Fused Lasso (Tibshirani et al., 2005) and Group Lasso (Yuan & Lin, 2006). Alternative regularized estimators that enforce variable selection are the Elastic Net (Zou & Hastie, 2005), SCAD (Fan & Li, 2001), the Dantzig selector (Candes & Tao, 2007) and boosting approaches (Bühlmann & Yu, 2003; Bühlmann & Hothorn, 2007; Tutz & Binder, 2006). These methods, however, were developed for models with univariate response.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Instead, this function is updated at each iteration by adding a function, namely B(t) T^(m)(x; c), which is smoothed across t according to the penalty. The approach of applying regularization to the base-learner, but not directly to the model itself, is common to boosting algorithms; see, for instance, Bühlmann and Hothorn (2007). Regularization of the boosting model is applied by selecting an appropriate stopping iteration, m_stop.…”
Section: Coefficient Boosting With Multivariate Trees (mentioning)
Confidence: 99%
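The statement above describes the general boosting recipe in which shrinkage enters through the base-learner and through the stopping iteration m_stop, rather than through a penalty on the final model. The following Python sketch is not taken from the cited work (which uses penalized base-learners B(t) T^(m)(x; c) smoothed across t); it is a minimal component-wise L2-boosting loop for squared-error loss, and the function name l2_boost, the step size nu, and the default m_stop = 100 are illustrative assumptions.

import numpy as np

def l2_boost(X, y, m_stop=100, nu=0.1):
    """Component-wise L2-boosting: a minimal sketch, not the cited paper's method.

    Each base-learner is a least-squares fit on a single covariate.
    Regularization comes from the small step size `nu` and from stopping
    after `m_stop` iterations, not from a penalty on the fitted model.
    Assumes the columns of X are centered and not identically zero.
    """
    n, p = X.shape
    coef = np.zeros(p)
    intercept = y.mean()          # offset (starting value of the fit)
    resid = y - intercept         # negative gradient of squared-error loss

    for _ in range(m_stop):
        # Fit every univariate base-learner to the current residuals and
        # keep the component with the smallest residual sum of squares.
        best_j, best_beta, best_rss = 0, 0.0, np.inf
        for j in range(p):
            xj = X[:, j]
            beta = xj @ resid / (xj @ xj)
            rss = np.sum((resid - beta * xj) ** 2)
            if rss < best_rss:
                best_j, best_beta, best_rss = j, beta, rss
        # Weak update: add only a fraction `nu` of the selected base-learner.
        coef[best_j] += nu * best_beta
        resid -= nu * best_beta * X[:, best_j]

    return intercept, coef

# Illustrative use on synthetic data: only two covariates are informative,
# so most entries of `beta_hat` stay near zero if m_stop is not too large.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=200)
intercept_hat, beta_hat = l2_boost(X, y, m_stop=150, nu=0.1)

In practice m_stop is the main tuning parameter and would be chosen by cross-validation or an information criterion such as AIC, as discussed in the abstract above.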
“…Boosting, which originates from machine learning, has become a popular approach for fitting regression and classification models. Overviews are provided by Hastie et al. (2009) and Bühlmann and Hothorn (2007). The main application of boosting has been towards estimating models for predicting or classifying a univariate response.…”
(mentioning)
Confidence: 99%
“…But even in low-dimensional settings the lack of a clear variable selection strategy provides further challenges. In order to deal with these problems, we propose a new inferential scheme for joint models based on gradient boosting (Bühlmann and Hothorn, 2007). Boosting algorithms emerged from the field of machine learning and were originally designed to enhance the performance of weak classifiers (base-learners) with the aim to yield a perfect discrimination of binary outcomes (Freund and Schapire, 1996).…”
Section: Introduction (mentioning)
Confidence: 99%