2014 Science and Information Conference
DOI: 10.1109/sai.2014.6918200
Feature selection in meta learning framework

Abstract: Feature selection is a key step in data mining. Unfortunately, there is no single feature selection method that is always the best, and the data miner usually has to experiment with different methods using a trial-and-error approach, which can be time consuming and costly, especially with very large datasets. Hence, this research aims to develop a meta learning framework that is able to learn which feature selection methods work best for a given data set. The framework involves obtaining the chara…

Cited by 12 publications (8 citation statements)
References 14 publications
“…However, IG and χ² rank features based on the data characteristics only, independently of the classifier. Hence, to corroborate these rankings, feature selection was further conducted on the 10 features with relatively high IG as well as high χ² values by taking the classifier into account through what is known as a wrapper approach [25]. Specifically, sequential forward feature selection was adopted to identify the top five features from among the 10 with relatively high IG and χ² values [21].…”
Section: Results
confidence: 99%
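The wrapper approach described above can be sketched as a greedy loop: start with an empty feature set and repeatedly add whichever candidate feature most improves the classifier's score. The sketch below is a minimal, hypothetical illustration; the `score` callable stands in for a real wrapper score (e.g. cross-validated classifier accuracy), and the toy additive utilities are made up.

```python
# Hypothetical sketch of wrapper-based sequential forward selection (SFS):
# starting from an empty set, greedily add the feature that most improves
# the score, stopping at k features. Names are illustrative, not from the paper.

def forward_select(features, score, k):
    """features: candidate feature ids (e.g. the top 10 by IG / chi-squared);
    score: callable mapping a feature subset to a classifier score;
    k: number of features to keep (e.g. the top five)."""
    selected = []
    while len(selected) < k:
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Toy wrapper score: pretend accuracy is the sum of per-feature utilities.
# A real wrapper would cross-validate a classifier on each candidate subset.
utility = {"f1": 0.3, "f2": 0.1, "f3": 0.25, "f4": 0.05, "f5": 0.2}
top = forward_select(list(utility), lambda s: sum(utility[f] for f in s), 3)
print(top)  # greedy picks the 3 highest-utility features
```

Because the search is greedy, each added feature is only locally optimal; this is what makes SFS cheap relative to exhaustive subset search.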
“…However, IG and χ 2 rank features based on the data characteristics only, independently from the classifier. Hence, to corroborate these rankings, feature selection was further conducted on the 10 features with relatively high IG as well as high χ 2 values by taking the classifier into account through what is known as a wrapper approach [25]. Specifically, a sequential forward feature selection was adopted to identify the top five features from among the 10 with relatively high IG and χ 2 values [21].…”
Section: Resultsmentioning
confidence: 99%
“…This project also introduced the concept of landmarking, which uses a simple learning algorithm to characterize a specific region of the problem space. The idea of landmarking is extended to relative landmarking (the relative order between landmarking algorithms) and to sub-sampling, where a landmarker is applied to a sample of the data, as used in Shilbayeh and Vadera (2014), Alcobaça et al. (2020) and Shen et al. (2020).…”
Section: Meta-learning Framework
confidence: 99%
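Landmarking with sub-sampling, as described above, can be sketched as: draw a sample of the data, run several cheap "landmark" learners on it, record their scores as meta-features, and (for relative landmarking) keep the ranking among them. This is a minimal sketch under assumed names; the fixed-score landmarkers are placeholders for real cheap learners such as a decision stump, 1-NN, or naive Bayes.

```python
import random

# Hypothetical landmarking sketch: run cheap landmark learners on a
# sub-sample of the data and use their accuracies as meta-features.
# Relative landmarking keeps only the ordering among landmarkers.

def sub_sample(data, frac, seed=0):
    rng = random.Random(seed)
    return rng.sample(data, max(1, int(len(data) * frac)))

def landmark_meta_features(data, landmarkers, frac=0.5):
    sample = sub_sample(data, frac)
    scores = {name: learner(sample) for name, learner in landmarkers.items()}
    # relative landmarking: rank landmarkers by score, best first
    ranking = sorted(scores, key=scores.get, reverse=True)
    return scores, ranking

# Toy landmarkers returning a fixed "accuracy"; real ones would train
# a simple model on the sample and evaluate it.
data = list(range(100))
landmarkers = {"stump": lambda s: 0.61, "1nn": lambda s: 0.72, "nb": lambda s: 0.58}
scores, ranking = landmark_meta_features(data, landmarkers)
print(ranking)  # ['1nn', 'stump', 'nb']
```

Sub-sampling keeps the cost of landmarking low on large datasets, while the relative ordering is often more stable across sample sizes than the raw scores.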
“…As more recent studies, we can cite [42], which constructed a symbolic model for recommending the best feature selection algorithm. In addition, a lazy method for recommending feature selection algorithms was presented in [43], while a meta-learning framework was developed in [44] to learn which feature selection algorithms are most suitable for a given dataset. The authors in [45] used five different categories of state-of-the-art meta-features to characterize datasets, and built a separate regression model to connect datasets to each candidate algorithm.…”
Section: Related Work
confidence: 99%
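The last approach mentioned above (one regression model per candidate algorithm, as in [45]) can be sketched as: each model maps a dataset's meta-features to a predicted performance, and the recommender returns the algorithm with the highest prediction. All weights, algorithm names, and meta-feature names below are invented for illustration; a real system would fit the regressions on past experiment results.

```python
# Hypothetical sketch of a regression-based recommender: one linear model
# per candidate feature-selection algorithm maps dataset meta-features to
# expected performance; recommend the argmax. All numbers are made up.

def predict(weights, meta):
    # linear regression prediction: dot product of weights and meta-features
    return sum(w * meta.get(name, 0.0) for name, w in weights.items())

def recommend(models, meta):
    scores = {algo: predict(w, meta) for algo, w in models.items()}
    return max(scores, key=scores.get), scores

models = {
    "relief":  {"n_features": 0.002, "class_entropy": 0.30},
    "cfs":     {"n_features": 0.001, "class_entropy": 0.45},
    "wrapper": {"n_features": -0.004, "class_entropy": 0.60},
}
meta = {"n_features": 50, "class_entropy": 0.9}  # dataset characterization
best, scores = recommend(models, meta)
print(best)  # -> 'cfs'
```

The per-algorithm regression design lets candidates be added or retrained independently, at the cost of the models never seeing each other's errors.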