2002
DOI: 10.1109/72.977291
Input feature selection for classification problems

Abstract: Feature selection plays an important role in classifying systems such as neural networks (NNs). The attributes available to a classifier may be relevant, irrelevant, or redundant; from the viewpoint of managing a dataset, which can be huge, reducing the number of attributes by selecting only the relevant ones is desirable, and higher performance at lower computational cost can then be expected. In this paper, we propose two feature selection algorithms. The limitation of the mutual information feature selector (MIFS) i…

Cited by 811 publications (462 citation statements)
References 12 publications
“…This procedure becomes even more difficult in the present case because of the small amount of data. The MIFS-U algorithm (Kwak & Choi, 2002), and the results obtained, reduce the problem to computations of mutual information between two random variables.…”
Section: Tratamento da Base de Dados (Data Preprocessing)
unclassified
“…Mutual information computation is straightforward for discrete (categorical) random variables, where an exact solution can be obtained easily. However, for continuous random variables, which are frequently encountered in mutual information computations, it is difficult to obtain the exact solution since computing the exact probability density functions (pdfs) is impossible [21]. Hence, an estimation of the mutual information is required, and different methods can be employed.…”
Section: Related Work
mentioning
confidence: 99%
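For discrete variables, the exact computation the statement above refers to reduces to plugging empirical joint frequencies into the definition of mutual information. A minimal sketch (the function name and NumPy-based implementation are illustrative, not from the cited works):

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information I(X;Y) in nats between two
    discrete sequences, computed from joint relative frequencies."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)          # marginal P(X = xv)
        for yv in np.unique(y):
            py = np.mean(y == yv)      # marginal P(Y = yv)
            pxy = np.mean((x == xv) & (y == yv))  # joint P(xv, yv)
            if pxy > 0:                # 0 * log 0 is taken as 0
                mi += pxy * np.log(pxy / (px * py))
    return mi
```

For continuous variables one would instead discretize or use a density estimator, which is exactly the estimation problem the statement alludes to.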
“…A variant of the MIFS method, called MIFS-U [21], emerged later to overcome a limitation of MIFS, which does not properly reflect the relationship between a feature and the class label in its redundancy term if β is set too large. The MIFS-U approach makes a slight change to the right-hand-side term, so that the MIFS criterion becomes…”
Section: Related Work
mentioning
confidence: 99%
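The greedy forward selection that uses such a criterion can be sketched as follows. All names here are illustrative, and the score implemented is the MIFS-U criterion as published by Kwak and Choi, I(C; f) − β Σ_s [I(C; s)/H(s)] I(f; s), applied to already-discretized features; a constant feature (H(s) = 0) is assumed absent:

```python
import numpy as np

def mi(x, y):
    """Empirical mutual information (nats) between discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    total = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                total += pxy * np.log(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return total

def entropy(x):
    """Empirical entropy H(X) in nats of a discrete sequence."""
    x = np.asarray(x)
    return -sum(p * np.log(p) for p in (np.mean(x == v) for v in np.unique(x)))

def mifs_u(X, c, k, beta=1.0):
    """Greedily select k feature columns of X for class labels c using
    the MIFS-U score: I(c; f) - beta * sum_s [I(c; s) / H(s)] * I(f; s)."""
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < k and remaining:
        scores = {}
        for f in remaining:
            redundancy = sum(
                mi(c, X[:, s]) / entropy(X[:, s]) * mi(X[:, f], X[:, s])
                for s in selected
            )
            scores[f] = mi(c, X[:, f]) - beta * redundancy
        best = max(remaining, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected
```

The I(C; s)/H(s) weighting is the "slight change" the statement mentions: it scales each redundancy term by how informative the already-selected feature is about the class, so that β penalizes redundancy without washing out feature-class relevance.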
“…Then it would be possible to capture and reconstruct the underlying relationship between input-output data pairs. In this respect, some model-dependent approaches have been proposed [2][3][4][5][6].…”
Section: Input Selection
mentioning
confidence: 99%