2021 4th International Conference on Computing and Communications Technologies (ICCCT)
DOI: 10.1109/iccct53315.2021.9711896
Credit Risk Analysis using LightGBM and a comparative study of popular algorithms

Cited by 11 publications (4 citation statements)
References 11 publications
“…Moreover, the processing time of LightGBM is always around one minute, and parameter tuning takes less time than for Random Forest even though the number of parameter combinations is larger. This result is consistent with previous findings of the key reference (Chada, 2019), as well as other research in the literature (Ge et al., 2020; Ke et al., 2017; Ponsam et al., 2021).…”
Section: Discussion (supporting)
confidence: 93%
“…Machine learning techniques including logistic regression, kNN, k-means clustering, SVM, neural networks, decision trees, naïve Bayes, XGBoost, CatBoost, AdaBoost, and LightGBM have recently been used in credit risk analysis applied to various datasets (Pandey et al., 2016; Qiu et al., 2019; Menendez, 2019; Pillai et al., 2019; Tian et al., 2020; Wang et al., 2020a; Wang et al., 2020b; Turjo et al., 2021; Ponsam et al., 2021; Guo and Zhou, 2022).…”
Section: Introduction (mentioning)
confidence: 99%
“…LightGBM adopts a histogram optimization strategy: before training, it sorts the samples along each feature dimension and divides each feature into histogram bins. In subsequent training, the algorithm only needs to use the histogram as a "feature" for constructing the decision tree, which significantly reduces the number of traversals of the sample set [7]. The algorithm is shown in Figure 5…”
Section: Introduction to the Model and Its Concepts (mentioning)
confidence: 99%
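The histogram optimization described in the quote above can be illustrated with a minimal sketch. This is not LightGBM's actual implementation (which operates on gradients and uses more refined binning); it is a simplified, hypothetical example assuming equal-width bins and a variance-reduction-style gain, showing why scanning bin boundaries is cheaper than scanning every sample value.

```python
# Sketch of histogram-based split finding: bucket a feature into a small
# number of bins, then search for the best split over the n_bins - 1 bin
# boundaries instead of over all distinct sample values.

def build_histogram(values, labels, n_bins=8):
    """Bucket samples into equal-width bins, accumulating the per-bin
    label sum and sample count."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for v, y in zip(values, labels):
        b = min(int((v - lo) / width), n_bins - 1)  # clamp max value into last bin
        sums[b] += y
        counts[b] += 1
    return sums, counts

def best_split(values, labels, n_bins=8):
    """Scan only the n_bins - 1 histogram boundaries and return the
    boundary index and gain of the best split (sum-of-squares gain)."""
    sums, counts = build_histogram(values, labels, n_bins)
    total_sum, total_cnt = sum(sums), sum(counts)
    best_gain, best_bin = 0.0, None
    left_sum, left_cnt = 0.0, 0
    for b in range(n_bins - 1):
        left_sum += sums[b]
        left_cnt += counts[b]
        right_sum = total_sum - left_sum
        right_cnt = total_cnt - left_cnt
        if left_cnt == 0 or right_cnt == 0:
            continue
        # Gain of splitting at this boundary relative to no split.
        gain = (left_sum ** 2 / left_cnt
                + right_sum ** 2 / right_cnt
                - total_sum ** 2 / total_cnt)
        if gain > best_gain:
            best_gain, best_bin = gain, b
    return best_bin, best_gain

# Two clearly separated groups: the best split lands between them.
split_bin, gain = best_split([0, 1, 2, 3, 10, 11, 12, 13],
                             [0, 0, 0, 0, 1, 1, 1, 1])
```

The cost of the boundary scan depends only on `n_bins`, not on the number of samples, which is the source of the traversal reduction the quote refers to.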