2017
DOI: 10.1111/exsy.12217

A feature selection enabled hybrid‐bagging algorithm for credit risk evaluation

Abstract: Hybrid models based on feature selection and machine learning techniques have significantly enhanced the accuracy of standalone models. This paper presents a feature selection‐based hybrid‐bagging algorithm (FS‐HB) for improved credit risk evaluation. Two feature selection methods, chi‐square and principal component analysis, were used for ranking and selecting the important features from the datasets. The classifiers were built on five training and test data partitions of the input data set. The performance of t…
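The FS‐HB idea described in the abstract can be sketched as a two-stage pipeline: rank features with a chi‐square test, keep the top-ranked subset, then train a bagged ensemble on the reduced data. The sketch below is an illustrative reconstruction using scikit-learn, not the authors' exact pipeline; the synthetic data, the choice of decision trees as base learners, and all parameter values (k=10, 25 estimators) are assumptions.

```python
# Illustrative FS-HB sketch: chi-square feature ranking followed by bagging.
# Not the cited paper's implementation; dataset and parameters are invented.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative inputs
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: rank features by chi-square score and keep the top 10
selector = SelectKBest(chi2, k=10).fit(X_tr, y_tr)

# Stage 2: bagged ensemble of decision trees on the reduced feature set
model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                          random_state=0)
model.fit(selector.transform(X_tr), y_tr)
acc = model.score(selector.transform(X_te), y_te)
```

Principal component analysis could be swapped in for the selector stage (e.g. `sklearn.decomposition.PCA`), which is the second ranking method the abstract mentions.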


Cited by 41 publications (23 citation statements). References 56 publications.
“…Real-time datasets are susceptible to various quality issues, such as missing values, different data structures, data redundancy, and imbalanced data [45]. Herein, standard preprocessing operations are applied to the data.…”
Section: Data Processing and Feature Extraction
confidence: 99%
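The "standard preprocessing operations" this statement alludes to typically include imputing missing values and dropping redundant rows. A minimal pandas sketch, assuming mean imputation and exact-duplicate removal (the cited work does not specify its exact operations, and the column names here are invented):

```python
# Hypothetical preprocessing sketch: deduplicate rows, then mean-impute
# missing values. Data and operations are illustrative assumptions.
import numpy as np
import pandas as pd

df = pd.DataFrame({"income": [50.0, np.nan, 30.0, 30.0],
                   "age":    [25,   40,     31,   31]})

df = df.drop_duplicates()  # remove redundant (identical) rows
df["income"] = df["income"].fillna(df["income"].mean())  # impute missing values
```

Handling imbalanced classes, also mentioned in the quote, would need a separate step such as resampling, which is beyond this fragment.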
“…In a complete data set, ensemble approaches can improve the classification performance [21]. Bootstrapping or bagging is a popular ensemble learning approach where data is re-sampled with substitution several times [3,7]. The reason for good performance of bagging is that it creates multiple datasets which lead to diverse and accurate classifiers.…”
Section: Bagging and MI Ensemble
confidence: 99%
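The bagging procedure the statement above describes — resampling the data with replacement several times and aggregating diverse classifiers — can be sketched from scratch in a few lines. The function name and parameters below are illustrative, not from the cited paper:

```python
# Minimal from-scratch bagging sketch: draw bootstrap samples (resampling
# with replacement), fit one base learner per sample, and aggregate test
# predictions by majority vote. Names and defaults are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagged_predict(X_train, y_train, X_test, n_estimators=15, seed=0):
    rng = np.random.default_rng(seed)
    votes = np.zeros((n_estimators, len(X_test)), dtype=int)
    for i in range(n_estimators):
        # bootstrap sample: same size as the training set, drawn with replacement
        idx = rng.integers(0, len(X_train), size=len(X_train))
        tree = DecisionTreeClassifier(random_state=i)
        tree.fit(X_train[idx], y_train[idx])
        votes[i] = tree.predict(X_test)
    # majority vote across the ensemble (binary labels assumed)
    return (votes.mean(axis=0) > 0.5).astype(int)
```

Because each tree sees a different bootstrap sample, the ensemble members disagree on borderline cases, and the vote averages out their individual errors — the diversity argument made in the quoted text.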
“…Baesens (2003) compared various classifiers on eight data sets with 17 classifiers. Ensembles of classifiers also emerged as novel classification approaches and have since been improved and combined (Dahiya, Handa, & Singh, 2017). In another study, Baesens studied the new benchmarking analyses of credit scoring algorithms (Lessmann et al, 2015), including novel learning methods and new data sets.…”
Section: Literature Review
confidence: 99%