2012
DOI: 10.1016/j.eswa.2011.08.172
Feature selection using Bayesian and multiclass Support Vector Machines approaches: Application to bank risk prediction

Cited by 29 publications (17 citation statements)
References 27 publications
“…Indeed, all the curves depicted in the top panels display the expected typical behavior; decreasing to reach a global minimum then increasing. This typical behavior reflects good performance of the stepwise algorithm and jointly attests good quality of the variable ranking [22,38]. This typical behavior is far from being realized with the SVR model especially when using the RFS hierarchy.…”
Section: Experiments on the Training Sets (mentioning)
confidence: 99%
“…This typical behavior reflects good performance of the stepwise algorithm and jointly attests to the good quality of the variable ranking. This behavior was deeply analyzed in the work of Ghattas and Ben Ishak (2008) for binary classification and more recently in the work of Feki et al. (2012) for multiclass problems. Moreover, the OOB curve (OOB-MSE) seems a little more optimistic than the RS curve (RS-MSE) obtained by random splitting for RF.…”
Section: Stepwise Curve Shape (mentioning)
confidence: 99%
“…The set of variables leading to the model of smallest error rate is selected. Our stepwise algorithm has shown promising results on classification problems even in situations exposing the curse-of-dimensionality phenomenon, i.e., when the number of input variables p is very large compared to the sample size n (Ghattas and Ben Ishak, 2008; Feki et al., 2012).…”
Section: Variable Selection Algorithms (mentioning)
confidence: 99%
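The excerpts above describe a stepwise selection scheme: given a ranking of the input variables, fit nested models of increasing size, track the estimated error, and keep the subset at the global minimum of the resulting error curve. A minimal sketch of that idea, with illustrative names and a toy error curve (not the authors' implementation):

```python
def select_by_stepwise(ranked_vars, error_of_subset):
    """Return the prefix of `ranked_vars` whose nested model achieves
    the smallest estimated error, together with that error."""
    best_k, best_err = 1, float("inf")
    for k in range(1, len(ranked_vars) + 1):
        # Evaluate the model built on the k top-ranked variables.
        err = error_of_subset(ranked_vars[:k])
        if err < best_err:
            best_k, best_err = k, err
    return ranked_vars[:best_k], best_err

# Toy error curve: decreases to a global minimum at 3 variables, then
# rises again -- the "typical behavior" the citing papers describe.
toy_errors = {1: 0.30, 2: 0.22, 3: 0.15, 4: 0.18, 5: 0.25}
subset, err = select_by_stepwise(
    ["x1", "x2", "x3", "x4", "x5"],
    lambda s: toy_errors[len(s)],
)
print(subset, err)  # ['x1', 'x2', 'x3'] 0.15
```

In practice `error_of_subset` would be a cross-validated (or OOB) error estimate for the model restricted to those variables.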
“…Though the χ² statistic is effective and has found wide applications [5], [6], it has two drawbacks. First, it only counts the document frequency for each feature, without considering frequency within documents; thus, it favours low-frequency features.…”
Section: Introduction (mentioning)
confidence: 99%
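The drawback quoted above can be made concrete: the χ² feature score is computed from a 2×2 contingency table of document counts (does a document contain the term or not), so within-document term frequency never enters the score. A minimal sketch with illustrative names:

```python
def chi2_term_class(a, b, c, d):
    """Chi-square statistic from a 2x2 table of DOCUMENT counts:
    a = docs in the class containing the term,
    b = docs in the class without the term,
    c = docs outside the class containing the term,
    d = docs outside the class without the term."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# Only presence/absence per document counts: a term occurring once per
# document gets exactly the same score as one occurring ten times per
# document, which is the first drawback the excerpt points out.
print(chi2_term_class(40, 10, 10, 40))  # → 36.0
```

This is why refinements of χ² selection typically fold in within-document term frequency before scoring.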