2016
DOI: 10.1504/ijdats.2016.077484

Feature selection in accident data: an analysis of its application in classification algorithms

Abstract: Feature selection aims to select a reduced subset of features with high predictive information and to remove irrelevant features that carry little predictive information. In this paper, we propose an ensemble approach to feature selection that applies multiple feature selection techniques and combines their results to yield more robust and stable outcomes. The ensemble of feature ranking techniques is built in two steps: the first step creates a set of different feature selectors, while the sec…
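The two-step scheme sketched in the abstract (build several independent rankers, then merge their rankings) can be illustrated with a minimal sketch. This is an assumed illustration only, not the paper's exact procedure: it uses scikit-learn's chi2, f_classif and mutual_info_classif as the individual selectors and mean-rank aggregation as the combiner.

# Sketch of a two-step ensemble feature ranking (illustrative only; the
# individual selectors and the mean-rank combiner are assumptions, not the
# paper's exact choices).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

X, y = load_breast_cancer(return_X_y=True)
X = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative feature values

# Step 1: create a set of different feature selectors, each scoring every feature.
score_sets = [
    chi2(X, y)[0],
    f_classif(X, y)[0],
    mutual_info_classif(X, y, random_state=0),
]

# Step 2: combine the individual rankings; here, by averaging the rank each
# selector assigns to a feature (rank 0 = most informative).
ranks = np.mean([np.argsort(np.argsort(-s)) for s in score_sets], axis=0)
top_features = np.argsort(ranks)[:10]  # indices of the ten best-ranked features
print(top_features)

Averaging ranks rather than raw scores keeps the combination insensitive to the very different scales of the chi-square, F and mutual-information statistics.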

Cited by 8 publications (2 citation statements)
References 13 publications
“…Without feature selection, classification of text data is incomplete because the selection of good features strengthens the performance of a classifier. Much research work has been done on feature selection (Kalaivani and Shunmuganathan, 2016; Lin and Chen, 2012; Sarkar, 2016; Lee and Kim, 2015; Seetha et al., 2015; Roul et al., 2015, 2016a; Mokeddem et al., 2016; Azam and Yao, 2012; Roul et al., 2016b). Information theory-based feature selection is very effective in terms of computational cost, dimensionality, scalability and classifier independence.…”
Section: Literature Review (mentioning)
confidence: 99%
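As this excerpt notes, information-theoretic criteria are a common low-cost filter. A minimal sketch, assuming scikit-learn's mutual_info_classif with SelectKBest (an illustration of the general idea, not the cited works' implementations):

# Information-theoretic filter: keep the k features with the highest mutual
# information with the class label (illustrative; not from the cited papers).
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)
selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)  # shape: (n_samples, 10)
print(X_reduced.shape)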
“…Gunavathi and Premalatha (2015) used cuckoo search optimisation techniques to find the best features from the original dataset. Sarkar and Sahoo (2016) proposed an ensemble approach for selecting the best features among the original features. Zheng et al. (2010) used feature clustering techniques to find the best subset of features.…”
Section: Introduction (mentioning)
confidence: 99%