Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/476
Gradient Boosting with Piece-Wise Linear Regression Trees

Abstract: Gradient Boosted Decision Trees (GBDT) is a very successful ensemble learning algorithm widely used across a variety of applications. Recently, several variants of GBDT training algorithms and implementations have been designed and heavily optimized in some very popular open-source toolkits, including XGBoost, LightGBM and CatBoost. In this paper, we show that both the accuracy and efficiency of GBDT can be further enhanced by using more complex base learners. Specifically, we extend gradient boosting to use …
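The idea sketched in the abstract, replacing constant-valued leaves with linear models, can be tried with off-the-shelf tooling. The snippet below is a minimal sketch using LightGBM's `linear_tree` option (available in recent LightGBM releases), which fits a linear model in each leaf; it only illustrates piece-wise linear trees inside a boosting loop and is not the authors' reference implementation. The data and hyperparameters are made up for the example.

```python
# Minimal sketch (not the authors' implementation): LightGBM's `linear_tree`
# option fits a linear model in each leaf instead of a constant, illustrating
# the piece-wise linear trees described in the abstract.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
# Target with a strong linear component plus a mild non-linear term.
y = 3.0 * X[:, 0] + np.sin(2.0 * X[:, 1]) + 0.1 * rng.normal(size=2000)

params = {
    "objective": "regression",
    "linear_tree": True,   # piece-wise linear leaves instead of constant leaves
    "num_leaves": 15,
    "learning_rate": 0.1,
    "verbosity": -1,
}
train_set = lgb.Dataset(X, label=y)
booster = lgb.train(params, train_set, num_boost_round=100)
print(booster.predict(X[:5]))
```

With `linear_tree` set to False the same code trains a standard constant-leaf GBDT, which makes it easy to compare the two variants on a given dataset.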

Cited by 26 publications (15 citation statements) · References 1 publication
“…For BERT-m7G, we first explore six different classifiers to choose the base classifiers of the stacking ensemble classifier algorithm, including the light gradient boosting machine (LightGBM) [31], support vector machine (SVM) [32], random forest (RF) [33], naive Bayes (NB) classifier [34, 35], logistic regression (LR) [36, 37], and gradient boosting decision tree (GBDT) [38, 39]. Subsequently, LightGBM, SVM, and LR are selected as the optimal combination of base classifiers because, when the optimal feature subset is used as the input features, their prediction accuracy is higher than that of the other machine learning classifiers.…”
Section: Methods
confidence: 99%
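As an aside, the stacking setup described in this statement (LightGBM, SVM, and LR as base classifiers feeding a meta-learner) can be outlined with scikit-learn's StackingClassifier. The sketch below uses synthetic data and placeholder hyperparameters; it is not the cited paper's BERT-m7G configuration, and the choice of logistic regression as the final estimator is an assumption.

```python
# Illustrative sketch of the stacking setup quoted above: LightGBM, SVM and
# logistic regression as base classifiers, combined by a logistic-regression
# meta-learner. Data and hyperparameters are placeholders, not BERT-m7G's.
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

base_learners = [
    ("lgbm", LGBMClassifier(n_estimators=100)),
    ("svm", SVC(probability=True)),           # probabilities feed the meta-learner
    ("lr", LogisticRegression(max_iter=1000)),
]
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold predictions avoid leaking base-learner overfitting
)
print(cross_val_score(stack, X, y, cv=3).mean())
```

Swapping candidate base learners in and out of `base_learners` and comparing cross-validated scores mirrors, in outline, the selection procedure the statement describes.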
“…classifiers, e.g., k-nearest neighbor (KNN) [61], support vector machines (SVM) [62], random forest (RF) [63], gradient boosting decision tree (GBDT) [64], Naïve Bayes classifier (NB) [65], logistic regression (LR) [66], light gradient boosting machine (LightGBM) [67], extreme gradient boosting (XGBoost) [54], and adaptive boosting (AdaBoost) [68]. Step…”
Section: Stacked Ensemble Classifier
confidence: 99%
“…The best split will be found after sweeping over all features and thresholds. While this method is widely adopted [43], [44], it has prohibitively high computational complexity because we need to solve a large number of LS problems.…”
Section: B. Binary-Split Alternate Minimization Algorithm
confidence: 99%
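The complexity this statement points to is easy to see in a brute-force sketch: every candidate (feature, threshold) pair requires a fresh least-squares (LS) fit in each child, so the number of LS solves scales with the number of features times the number of distinct thresholds. The helper names below (ls_error, best_binary_split) are hypothetical and the data is synthetic; this is a sketch of the general idea, not the cited paper's algorithm.

```python
# Sketch of the brute-force split search described above: every candidate
# (feature, threshold) pair requires a least-squares fit in each child, so
# the number of LS solves grows with features x thresholds.
import numpy as np

def ls_error(X, y):
    """Residual sum of squares of a least-squares linear fit with intercept."""
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coef) ** 2))

def best_binary_split(X, y, min_leaf=5):
    """Sweep all features and thresholds, solving two LS problems per candidate."""
    best = (None, None, np.inf)  # (feature index, threshold, total error)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            right = ~left
            if left.sum() < min_leaf or right.sum() < min_leaf:
                continue
            err = ls_error(X[left], y[left]) + ls_error(X[right], y[right])
            if err < best[2]:
                best = (j, t, err)
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.where(X[:, 0] > 0, 2.0 * X[:, 1], -X[:, 2]) + 0.1 * rng.normal(size=200)
print(best_binary_split(X, y))
```

A common way to reduce this cost in practice is to update the least-squares sufficient statistics incrementally as the threshold sweeps over sorted values, rather than refitting each candidate split from scratch.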