2024
DOI: 10.1016/j.eswa.2023.121484

Consumer credit risk assessment: A review from the state-of-the-art classification algorithms, data traits, and learning methods

Xiaoming Zhang,
Lean Yu

Cited by 8 publications (6 citation statements). References: 280 publications.

Citation statements (ordered by relevance):
“…Classifiers were categorized into three groups: individual classifiers (logistic regression (LR), k-nearest neighbors (KNN), support vector machine (SVM), naïve Bayes (NB), decision tree (DT)), ensemble classifiers (random forest (RF), XGBoost (XGB), LightGBM (LGBM), CatBoost (CAT)), and balanced classifiers (balanced bagging classifier (BBC), balanced random forest (BRF)). The classifiers were selected based on the summary in [43], which reviewed 281 credit-risk-model-related articles; our experiment did not include deep learning algorithms such as artificial neural networks [44] and convolutional neural networks, although that summary mentioned them, because they are more complex than individual classifiers and carry a higher risk of overfitting, while their performance does not surpass that of the other classifiers [45]. Subsequently, all training sets were evaluated using these 11 classifiers.…”
Section: Framework and Evaluation Metrics
mentioning confidence: 99%
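As an illustration of the setup described in this excerpt, the sketch below assembles the 11 named classifiers in their three groups and scores each with cross-validated AUC on a synthetic imbalanced dataset. The library choices (scikit-learn, xgboost, lightgbm, catboost, imbalanced-learn), the hyperparameters, and the data are assumptions for illustration only, not the cited experiment's actual configuration.

```python
# Hypothetical sketch of the 11-classifier comparison described in the excerpt above.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier
from imblearn.ensemble import BalancedBaggingClassifier, BalancedRandomForestClassifier

classifiers = {
    # Individual classifiers
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=0),
    # Ensemble classifiers
    "RF": RandomForestClassifier(random_state=0),
    "XGB": XGBClassifier(eval_metric="logloss"),
    "LGBM": LGBMClassifier(verbose=-1),
    "CAT": CatBoostClassifier(verbose=0),
    # Balanced classifiers
    "BBC": BalancedBaggingClassifier(random_state=0),
    "BRF": BalancedRandomForestClassifier(random_state=0),
}

# Imbalanced synthetic stand-in for a credit-risk training set (illustration only).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)

for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```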
“…Figure 5a shows that the validation error is higher than the fitting error, and that the BWASD requires 10 iterations to optimize the NN structure. Particularly, BWASD returned N* = [0, 0, 3, 3, 4, 6] with c* = [4, 3, 3, 4, 2, 3] and p* = 0.9 for the specific run, while MWASD returned […, 3, 4, 3, 4]. That is, the NN trained under BWASD has 6 hidden layer neurons, while the NN trained under MWASD has 5.…”
Section: Dataset
mentioning confidence: 99%
“…Particularly, BWASD returned N* = [0, 0, 2, 2, 2, 5, 5] with c* = [2, 1, 1, 4, 4, 2, 2] and p* = 0.95 for the specific run, while MWASD returned N* = [0, 1, 2] with c* = [4, 3, 4]. That is, the NN trained under BWASD has 7 hidden layer neurons, while the NN trained under MWASD has 3.…”
Section: Dataset
mentioning confidence: 99%
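The two excerpts above report only the returned structure vectors. A minimal Python sketch, assuming (as the excerpts state) that the hidden-layer size equals the number of entries in N* and c*, makes that correspondence explicit; the helper name hidden_neuron_count and the dictionary of runs are hypothetical, and the BWASD/MWASD training procedures themselves are not reproduced here.

```python
def hidden_neuron_count(n_star, c_star):
    """Return the implied hidden-layer size, checking that N* and c* agree in length."""
    if len(n_star) != len(c_star):
        raise ValueError("N* and c* are expected to have one entry per hidden neuron")
    return len(n_star)

# Structure vectors reported in the excerpts above (one entry per hidden neuron).
runs = {
    "BWASD (first run)": ([0, 0, 3, 3, 4, 6], [4, 3, 3, 4, 2, 3]),        # 6 hidden neurons
    "BWASD (second run)": ([0, 0, 2, 2, 2, 5, 5], [2, 1, 1, 4, 4, 2, 2]), # 7 hidden neurons
    "MWASD (second run)": ([0, 1, 2], [4, 3, 4]),                          # 3 hidden neurons
}

for name, (n_star, c_star) in runs.items():
    print(f"{name}: {hidden_neuron_count(n_star, c_star)} hidden-layer neurons")
```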
“…Their study reveals a growing preference for advanced methods like ensembles and neural networks over traditional techniques like decision trees and logistic regression, often leading to better predictive results. Zhang and Yu (2024) offer an in-depth review of consumer credit risk assessment. They pinpoint a notable gap in research about data traits and stress the importance of multiscenario modeling in machine learning.…”
Section: Credit Scoring: An Overview
mentioning confidence: 99%