2017
DOI: 10.1016/j.eswa.2016.12.020

A comparative study on base classifiers in ensemble methods for credit scoring

Cited by 200 publications (137 citation statements)
References 46 publications
“…The labels for the new examples are selected with a probability that is inversely proportional to the prediction of the current ensemble. Decorate tries to maximize the diversity of the base classifiers by adding new artificial examples and re-weighting the training data [14] [19].…”
Section: Methods (confidence: 99%)
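The relabeling rule quoted above can be sketched in a few lines: given the current ensemble's class probabilities for an artificial example, Decorate draws the label with probability inversely proportional to those probabilities. A minimal sketch (the function name and zero-probability guard are ours, not from the paper):

```python
import numpy as np

def oppositional_label(ensemble_probs, rng=None):
    """Pick a label for an artificial example with probability
    inversely proportional to the current ensemble's class
    probabilities (the Decorate relabeling rule)."""
    rng = rng or np.random.default_rng()
    p = np.asarray(ensemble_probs, dtype=float)
    # Guard against zero probabilities before inverting (our choice).
    inv = 1.0 / np.maximum(p, 1e-12)
    inv /= inv.sum()  # normalize the inverted scores into a distribution
    return rng.choice(len(p), p=inv)
```

With ensemble probabilities `[0.9, 0.1]`, the sketch assigns the minority class about 90% of the time, which is what pushes the new classifier away from the current ensemble's predictions.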
“…The base algorithm is used to create a different base model instance for each bootstrap sample, and the ensemble output is the average of all base model outputs for a given input [14]. Decorate (Diverse Ensemble Creation by Oppositional Relabeling of Artificial Training Examples) iteratively generates an ensemble by learning a new classifier at each iteration. In the first iteration the base classifier is built from the given training data set and each successive classifier is built from an artificially generated training data set which is the result of the union of the original training data and artificial training examples, known as diversity data.…”
Section: Methods (confidence: 99%)
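The bagging scheme described above — one base model per bootstrap sample, with the ensemble output as the average of the base model outputs — can be sketched as follows. The class names and the trivial `MeanModel` base learner are our illustrative assumptions, not the paper's code:

```python
import numpy as np

class MeanModel:
    """Trivial base learner for illustration: predicts the mean
    of its (bootstrap) training targets for every input."""
    def fit(self, X, y):
        self.mu = float(np.mean(y))
        return self

    def predict(self, X):
        return np.full(len(X), self.mu)

class Bagging:
    """Minimal bagging sketch: fit one base model per bootstrap
    sample, then average the base model outputs."""
    def __init__(self, base_factory, n_models=10, seed=0):
        self.base_factory = base_factory
        self.n_models = n_models
        self.rng = np.random.default_rng(seed)
        self.models = []

    def fit(self, X, y):
        n = len(X)
        for _ in range(self.n_models):
            idx = self.rng.integers(0, n, size=n)  # bootstrap: sample n rows with replacement
            self.models.append(self.base_factory().fit(X[idx], y[idx]))
        return self

    def predict(self, X):
        # Ensemble output: average of all base model outputs for a given input.
        return np.mean([m.predict(X) for m in self.models], axis=0)
```

Any base learner with `fit`/`predict` can be dropped in for `MeanModel`; the papers quoted here use decision trees (including credal decision trees) in this role.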
“…A bagging scheme that uses a type of credal tree different from the CDT presented in [15] will be described in this work. This new model achieves better results than the bagging of CDT shown in [20] when data sets with added noise are classified.…”
Section: Introduction (confidence: 99%)
“…In the last years, it has been checked that the CDT model presents good experimental results in standard classification tasks (see Abellán and Moral [18] and Abellán and Masegosa [19]). The bagging scheme, using CDT as base classifier, has been used for the particular task of classifying data sets about credit scoring (see Abellán and Castellano [20]). A bagging scheme that uses a type of credal tree different from the CDT presented in [15] will be described in this work.…”
Section: Introduction (confidence: 99%)
“…Second, feature selection is another important data preprocessing step, and considered good practice in the domain of bankruptcy prediction (Abellán & Castellano, 2017; Tsai, 2009). Therefore, correlation-based feature selection (Hall, 2000) is applied, a basic filter feature selection approach that has seen prior applications in BFP literature (e.g.…”
Section: Asset Turnover (confidence: 99%)
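Hall's (2000) CFS scores whole feature subsets by trading feature–class correlation against feature–feature correlation; a much-simplified per-feature stand-in for that idea is to rank features by their absolute correlation with the target and greedily drop any feature that is strongly correlated with one already kept. The function name and threshold below are our assumptions, not Hall's algorithm:

```python
import numpy as np

def select_features(X, y, redundancy_threshold=0.9):
    """Simplified correlation-based filter (a stand-in for CFS, not
    the exact merit-score algorithm of Hall, 2000): rank features by
    |corr(feature, target)|, then keep each feature only if it is not
    highly correlated with a feature already kept."""
    n_features = X.shape[1]
    # Relevance: absolute Pearson correlation of each feature with the target.
    relevance = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)]
    )
    kept = []
    for j in np.argsort(-relevance):  # most relevant first
        # Redundancy check against the features kept so far.
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < redundancy_threshold
               for k in kept):
            kept.append(int(j))
    return kept
```

On data where one feature is a near-copy of another, the copy is filtered out while an uncorrelated feature survives, which is the redundancy-removal behavior the quoted passage relies on.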