2012
DOI: 10.1007/s00180-012-0382-5

Model-based boosting in R: a hands-on tutorial using the R package mboost

Abstract: We provide a detailed hands-on tutorial for the R add-on package mboost. The package implements boosting for optimizing general risk functions, utilizing component-wise (penalized) least squares estimates as base-learners for fitting various kinds of generalized linear and generalized additive models to potentially high-dimensional data. We give a theoretical background and demonstrate how mboost can be used to fit interpretable models of different complexity. As an example, we use mboost to predict the body fat …
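As a minimal sketch of the workflow the abstract describes (illustrative only; it assumes the bodyfat data shipped with package TH.data and arbitrary values for mstop and nu, not the exact model from the tutorial), a component-wise boosted linear model can be fit as follows:

```r
library("mboost")                      # model-based boosting
data("bodyfat", package = "TH.data")   # body fat data used as the tutorial example

## component-wise gradient boosting with linear base-learners;
## each iteration updates only the best-fitting covariate
glm1 <- glmboost(DEXfat ~ ., data = bodyfat,
                 control = boost_control(mstop = 100, nu = 0.1))
coef(glm1, off2int = TRUE)             # coefficients of the covariates selected so far
```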

Cited by 213 publications (268 citation statements); references 30 publications.
“…Higher numbers of m_stop hence lead to larger, more complex models, while smaller numbers lead to sparser models with less complexity. In practice, m_stop is often selected via cross-validation or resampling methods, by selecting the value that leads to the smallest empirical risk on test data (Hofner et al., 2014). For theoretical insights on the general concept of boosting algorithms, we refer to the work of Zhang and Yu (2005), who studied the numerical convergence and consistency with different loss functions.…”
Section: Component-wise Gradient Boosting
confidence: 99%
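The resampling-based selection of m_stop described in this statement can be sketched with mboost's cvrisk() (same illustrative bodyfat setup as above; the fold type and the upper limit of 500 iterations are arbitrary choices, not prescriptions):

```r
library("mboost")
data("bodyfat", package = "TH.data")

glm1 <- glmboost(DEXfat ~ ., data = bodyfat,
                 control = boost_control(mstop = 500, nu = 0.1))

## bootstrap the empirical risk along the iteration path
cvr <- cvrisk(glm1, folds = cv(model.weights(glm1), type = "bootstrap"))
mstop(cvr)        # iteration with the smallest out-of-sample risk
glm1[mstop(cvr)]  # set the model to the selected number of iterations
```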
“…However, in practice, there exists a quasi-linear relation between the step-length and the needed number of boosting iterations (Schmid and Hothorn, 2008). As a result, it is often recommended to use a fixed small value of ν = 0.1 for the step-length and to optimize the stopping iteration instead (Hofner et al., 2014). In the case of boosting JMs, where two additive predictors are fitted and potentially two boosting updates are carried out in each iteration of the algorithm, it is hard to justify why both predictors should be optimal after the same number of boosting iterations (i.e.…
Section: Boosting Joint Models
confidence: 99%
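The quasi-linear trade-off between the step-length ν and the number of iterations can be checked empirically; the sketch below (same illustrative data as before; the mstop values and the factor of ten are assumptions for this example, and exact agreement of the fits is not guaranteed) compares a small and the commonly used default step-length:

```r
library("mboost")
data("bodyfat", package = "TH.data")

## same model with two step-lengths: the smaller nu typically needs
## roughly proportionally more iterations to reach a comparable fit
fit_nu10 <- glmboost(DEXfat ~ ., data = bodyfat,
                     control = boost_control(mstop = 100,  nu = 0.1))
fit_nu01 <- glmboost(DEXfat ~ ., data = bodyfat,
                     control = boost_control(mstop = 1000, nu = 0.01))

round(coef(fit_nu10, off2int = TRUE), 3)  # coefficient estimates after 100 iterations
round(coef(fit_nu01, off2int = TRUE), 3)  # should end up at similar values
```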
“…To make this tutorial self-contained, we try to briefly explain all relevant features here as well. However, a dedicated hands-on tutorial is available for an applied introduction to mboost (Hofner, Mayr, Robinzonov, and Schmid 2014).…”
Section: The Package gamboostLSS
confidence: 99%
“…The choice of base-learners is crucial for the application of the gamboostLSS algorithm, as they define the type(s) of effect(s) that covariates will have on the predictors of the GAMLSS distribution parameters. See Hofner et al. (2014) for details and application notes on the base-learners.…”
Section: Base-learners
confidence: 99%
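To illustrate the point about base-learners (a sketch only; the choice of variables and the linear/smooth split are assumptions for this example, not recommendations from the cited paper), mboost lets covariates enter the additive predictor through different base-learners, e.g. bols() for linear effects and bbs() for P-spline effects:

```r
library("mboost")
data("bodyfat", package = "TH.data")

## linear effect for age, smooth P-spline effects for the circumferences
gam1 <- gamboost(DEXfat ~ bols(age) + bbs(hipcirc) + bbs(waistcirc),
                 data = bodyfat,
                 control = boost_control(mstop = 200, nu = 0.1))

table(selected(gam1))   # how often each base-learner was updated
names(coef(gam1))       # base-learners that entered the model at least once
```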
“…The one to be updated is chosen by evaluating fits to the gradient; the resulting fits indicate the element that improves the overall fit the most. This also leads to the sparseness of the solutions [15], since many coefficients will be estimated to be zero.…”
Section: B. Boosting as Functional Gradient Descent
confidence: 99%
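A minimal hand-rolled sketch of this selection step (illustrative only, not the mboost implementation; the simulated data, L2 loss, and the values of nu and mstop are assumptions) makes the mechanism and the resulting sparseness visible:

```r
## component-wise L2 boosting: in every iteration, each covariate is fit to the
## current negative gradient and only the best-fitting one is updated
set.seed(1)
n <- 100; p <- 10
X <- scale(matrix(rnorm(n * p), n, p))
y <- 2 * X[, 1] - X[, 3] + rnorm(n)       # only components 1 and 3 are informative

nu <- 0.1; mstop <- 100
beta <- rep(0, p); f <- rep(0, n)

for (m in seq_len(mstop)) {
  u   <- y - f                            # negative gradient of the L2 loss
  b   <- colSums(X * u) / colSums(X^2)    # univariate LS fit of each component
  rss <- colSums((u - X %*% diag(b))^2)   # fit of each component to the gradient
  j   <- which.min(rss)                   # update only the best-fitting component
  beta[j] <- beta[j] + nu * b[j]
  f <- f + nu * b[j] * X[, j]
}
round(beta, 2)  # sparse: most uninformative components stay exactly zero
```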