2010
DOI: 10.1504/ijbic.2010.036158
Correlation based feature selection method

Cited by 40 publications (13 citation statements)
References 11 publications
“…Even with missing rates increasing up to 50%, the proposed method performed better than SVM-FCM. This is because FPCA reduces the number of dataset dimensions compared to the full dataset [28], thus allowing FCM to perform better. From Table 7, we can see that the average errors between the actual and estimated values are minimal.…”
Section: RMSE
confidence: 99%
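The pipeline this statement describes, reducing the feature space first and then clustering the reduced data, can be sketched as follows. The cited work pairs FPCA with fuzzy c-means (FCM); neither is available in scikit-learn, so ordinary PCA and k-means stand in here purely to illustrate the reduce-then-cluster idea, and the dataset is synthetic.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical data: 200 samples with 30 features (synthetic, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))

# Step 1: reduce dimensionality so clustering operates in a smaller space
# (PCA as a stand-in for the FPCA used in the cited work).
X_reduced = PCA(n_components=5).fit_transform(X)

# Step 2: cluster in the reduced space (k-means as a stand-in for FCM).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_reduced)
print(labels[:10])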
“…The training data and misclassified points (shown in dashed lines) can be visualized on the parallel coordinates plot. These features can also be selected sequentially, by using Correlation Feature Selection Method-based filtering [15], or by using predictors transformed by principal component analysis to design accurate yet lightweight classifiers.…”
Section: Results
confidence: 99%
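Since the cited paper concerns correlation-based feature selection, a minimal filter of this kind can be sketched as below. This is not the authors' exact algorithm, only the common variant that ranks features by their absolute Pearson correlation with the target and keeps the top k.

import numpy as np

def correlation_filter(X, y, k):
    # Score each feature by |Pearson correlation| with the target and keep the k best.
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

# Synthetic example: the target depends on features 3 and 7 only.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
y = 2.0 * X[:, 3] - X[:, 7] + rng.normal(scale=0.1, size=100)
print(correlation_filter(X, y, k=2))  # expected to return indices 3 and 7 (in some order)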
“…As a result, the size of the hypothesis space will be reduced, classifiers will operate faster and more effectively, and classification accuracy will improve. Algorithms that perform feature selection can generally be divided into two categories, namely the wrappers (Michalak and Kwasnicka 2010) and the filters (Kohavi and John 1997). The filters, unlike the wrappers, operate independently of any learning algorithm.…”
Section: Variable Selection in Datasets
confidence: 99%
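The filter/wrapper distinction drawn in this statement can be made concrete with standard scikit-learn components (generic examples, not the specific methods of the cited papers): a filter scores features with a statistic computed independently of any learner, while a wrapper searches feature subsets by repeatedly fitting and scoring a learner.

from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif, SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Filter: univariate ANOVA F-scores, computed without training any classifier.
filter_sel = SelectKBest(score_func=f_classif, k=10).fit(X, y)

# Wrapper: forward sequential selection driven by cross-validated classifier accuracy.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
wrapper_sel = SequentialFeatureSelector(clf, n_features_to_select=10, direction="forward").fit(X, y)

print(filter_sel.get_support(indices=True))
print(wrapper_sel.get_support(indices=True))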