2013
DOI: 10.3389/fnbot.2013.00021
Gradient boosting machines, a tutorial

Abstract: Gradient boosting machines are a family of powerful machine-learning techniques that have shown considerable success in a wide range of practical applications. They are highly customizable to the particular needs of the application, such as being learned with respect to different loss functions. This article gives a tutorial introduction to the methodology of gradient boosting methods, with a strong focus on the machine-learning aspects of modeling. The theoretical material is complemented with descriptive examples…

Cited by 2,081 publications (1,282 citation statements)
References 51 publications
“…In gradient boosting modeling, a more accurate estimate of the response variable is obtained by consecutively fitting new models so as to reduce the discrepancy between the predicted and observed responses. The main idea of GBM is to fit each new model so that it is maximally correlated with the negative gradient of the loss function (Natekin and Knoll 2013). For a continuous response, the loss function can be the Gaussian or the Laplace function.…”
Section: Generalized Boosted Regression Model
confidence: 99%
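To make the quoted idea concrete, here is a minimal sketch of a single boosting step in Python, assuming scikit-learn's DecisionTreeRegressor as the base learner; the function names, tree depth, and learning-rate default are illustrative choices, not prescribed by the cited tutorial.

```python
# Illustrative sketch (not from the cited tutorial): one functional
# gradient-descent step. The new base learner is fitted to the negative
# gradient of the loss, evaluated at the current ensemble prediction.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boosting_step(X, y, current_pred, neg_gradient, learning_rate=0.1):
    # Pseudo-residuals: -dL/dF at the current predictions.
    pseudo_residuals = neg_gradient(y, current_pred)
    learner = DecisionTreeRegressor(max_depth=2)
    learner.fit(X, pseudo_residuals)
    # Shrunken update of the ensemble prediction.
    return current_pred + learning_rate * learner.predict(X), learner

# Gaussian (squared-error) loss: the negative gradient is the plain residual.
gaussian_neg_grad = lambda y, f: y - f
```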
“…For a continuous response variable, the appropriate loss function is the squared-error (L2) loss, whose negative derivative with respect to the prediction is the residual; the GBM can therefore be applied by fitting the residuals (Natekin and Knoll 2013). The GBM procedure begins by assigning a differentiable loss function, which measures the discrepancy between the observed and predicted response y, and by choosing an initial model F, which can be the average of y.…”
Section: Generalized Boosted Regression Model
confidence: 99%
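The procedure this passage outlines can be sketched end to end. The snippet below assumes the squared-error loss and small scikit-learn regression trees; all names and defaults are illustrative rather than the authors' reference implementation.

```python
# Runnable sketch of the residual-fitting procedure described above:
# start from the mean of y, then repeatedly fit small regression trees
# to the residuals, which are the negative gradient of the squared-error
# loss L(y, F) = (y - F)^2 / 2.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbm_l2(X, y, n_stages=100, learning_rate=0.1, max_depth=2):
    f0 = float(np.mean(y))            # initial model F0: the average of y
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_stages):
        residuals = y - pred          # -dL/dF for the L2 loss
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)
        pred += learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def predict_gbm(X, f0, trees, learning_rate=0.1):
    pred = np.full(X.shape[0], f0)
    for tree in trees:
        pred += learning_rate * tree.predict(X)
    return pred
```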
“…This result is achieved by making the new decision tree maximally correlated with the negative gradient of the loss function. A detailed review of GB methods is given in Natekin & Knoll (2013) …”
Section: Gradient Tree Boosting
confidence: 99%
“…This technique is flexible enough to be used with different families of loss functions, and this choice is often guided by the desired characteristics of the conditional distribution, such as robustness to outliers. Any loss function can be plugged into the framework by specifying the loss itself and a function that computes its negative gradient [17]. The squared L2 loss and the Laplacian…”
Section: Gradient Boosted Regression Trees
confidence: 99%
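As a rough illustration of this plug-in design, the snippet below shows the only piece a new loss has to supply, namely its negative gradient, for the two losses the passage names; the helper names are hypothetical.

```python
# Sketch of how a loss function is "plugged in": each loss supplies only
# its negative gradient; the boosting loop itself is unchanged.
import numpy as np

def neg_grad_l2(y, f):
    # Squared-error loss L = (y - f)^2 / 2  =>  -dL/df = y - f
    return y - f

def neg_grad_l1(y, f):
    # Laplacian (absolute) loss L = |y - f|  =>  -dL/df = sign(y - f)
    return np.sign(y - f)

# In the boosting loop, only the pseudo-residual computation changes:
# pseudo_residuals = neg_grad_l1(y, pred)   # outlier-robust variant
```

Swapping in the Laplacian gradient leaves every other step of the loop untouched, which is exactly what makes the framework loss-agnostic.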