2017
DOI: 10.1016/j.eswa.2017.02.017

A boosted decision tree approach using Bayesian hyper-parameter optimization for credit scoring

Cited by 589 publications (307 citation statements)
References 49 publications
“…It is an advanced implementation of the gradient boosting (GB) algorithm and uses the decision tree as the base classifier. After carefully reading the research in [12] and [20], the algorithm of GB and XGBoost is briefly summarized as follows. Suppose we have a dataset D = {(x_i, y_i)} containing n observations, where x_i and y_i denote the features and the target variable, respectively.…”
Section: Extreme Gradient Boosting Along With Hyper-parameters
confidence: 99%
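For context on the reconstructed formula above: the additive model and regularized objective that XGBoost optimizes are usually written as follows (a sketch of the standard formulation, not necessarily the exact notation of [12] or [20]):

    \hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \qquad f_k \in \mathcal{F}

    \mathcal{L} = \sum_{i=1}^{n} l(y_i, \hat{y}_i) + \sum_{k=1}^{K} \Omega(f_k),
    \qquad \Omega(f) = \gamma T + \frac{1}{2} \lambda \lVert w \rVert^2

Here each f_k is a regression tree, T is its number of leaves, w its vector of leaf weights, and l a differentiable convex loss; boosting fits the trees sequentially, each new tree trained against the gradient of the loss at the current prediction.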
“…It is a novel yet advanced variant of the gradient boosting algorithm and has obtained promising results in many Kaggle machine learning competitions [10]. Furthermore, XGBoost has been successfully applied to bankruptcy prediction and credit scoring in a few studies [11] [12].…”
Section: Introduction
confidence: 99%
“…(2016)), neural networks (see West (2000)), gradient boosting (GBT, see Xia et al. (2017)), and logistic regression. We use the out-of-sample loss as our main comparison metric, with lower loss values corresponding to better model performance.…”
Section: Model Comparisons
confidence: 99%
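A minimal sketch of that comparison protocol, assuming scikit-learn-style models and using log loss as an illustrative out-of-sample loss (the exact loss function and models of the citing study are not specified in this snippet):

    # Train several classifiers, then rank them by loss on a held-out set.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import log_loss
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "logistic": LogisticRegression(max_iter=1000),
        "neural_net": MLPClassifier(max_iter=500, random_state=0),
        "gbt": GradientBoostingClassifier(random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        loss = log_loss(y_te, model.predict_proba(X_te))  # lower is better
        print(f"{name}: out-of-sample log loss = {loss:.4f}")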
“…Fahmi et al. [55] and West and Bhattacharya [56] both used the same dataset, which is the one used in "UCSD-…". [64] proposed a sequential ensemble credit scoring model based on XGBoost, a variant of the gradient boosting machine, and the hyper-parameters of XGBoost are adaptively tuned by the tree-structured Parzen estimator, grid search, random search, and manual search. A sequential ensemble combines a series of weak base learners that process different hypotheses sequentially to form a better hypothesis, thus making good predictions [65] [66] [67].…”
Section: Credit Card Fraud Detection
confidence: 99%
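To make the tuning procedure in that snippet concrete, here is a hypothetical sketch of tuning XGBoost hyper-parameters with the tree-structured Parzen estimator via the hyperopt library; the search space, cross-validation setup, and evaluation budget are illustrative assumptions, not the settings of [64]:

    # Hypothetical sketch: TPE-based tuning of XGBoost hyper-parameters.
    import numpy as np
    from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # Illustrative search space -- not the ranges used in the cited study.
    space = {
        "max_depth": hp.quniform("max_depth", 3, 10, 1),
        "learning_rate": hp.loguniform("learning_rate", np.log(0.01), np.log(0.3)),
        "subsample": hp.uniform("subsample", 0.5, 1.0),
    }

    def objective(params):
        model = XGBClassifier(
            n_estimators=100,
            max_depth=int(params["max_depth"]),  # quniform yields floats
            learning_rate=params["learning_rate"],
            subsample=params["subsample"],
        )
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        return {"loss": -auc, "status": STATUS_OK}  # fmin minimizes, so negate AUC

    best = fmin(fn=objective, space=space, algo=tpe.suggest,
                max_evals=50, trials=Trials())
    print("Best hyper-parameters found:", best)

Unlike grid or random search, TPE builds a probabilistic model of which hyper-parameter regions have produced good losses and proposes new trials from it, which typically reaches a strong configuration in fewer evaluations.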