Automated feature selection in neuroevolution (2009)
DOI: 10.1007/s12065-009-0018-z

Abstract: Feature selection is a task of great importance. Many feature selection methods have been proposed; they can be divided broadly into two groups according to their dependence on the learning algorithm or classifier. Recently, Whiteson et al. proposed Feature Selective NeuroEvolution of Augmenting Topologies (FS-NEAT), a method that selects features at the same time as it evolves neural networks that use those features as inputs. In this paper, a novel feature selection method called Feat…

Cited by 21 publications (19 citation statements). References 30 publications.
“…We presented a novel feature-deselective classifier, FD-NEAT, in [14]. FD-NEAT is based on NEAT, a method that uses genetic algorithms (GAs) to evolve the topology and the weights of networks that best fit the complexity of the task at hand.…”
Section: Feature Deselective Neuroevolution of Augmenting Topologies (mentioning confidence: 99%)
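The statement above describes the core idea only at a high level: a genetic algorithm evolves both connection structure and weights, so selection pressure can prune connections to irrelevant inputs. The following is a minimal, hypothetical sketch of that idea under strong simplifications — a fixed single-neuron network, a boolean input mask standing in for evolvable input connections, and an elitist mutation-only GA. It is not the authors' FD-NEAT implementation; all names (`predict`, `evolve`, the AND-with-noise task) are illustrative assumptions.

```python
import random

# Toy task: the target uses only x0 AND x1; x2 is an irrelevant (noise) input.
DATA = [((a, b, n), a & b) for a in (0, 1) for b in (0, 1) for n in (0, 1)]

def predict(genome, x):
    """Threshold unit whose inputs can be deselected via a boolean mask."""
    mask, weights, bias = genome
    s = sum(w * xi for m, w, xi in zip(mask, weights, x) if m) + bias
    return 1 if s > 0 else 0

def fitness(genome):
    """Classification accuracy over the full truth table (range 0..1)."""
    return sum(predict(genome, x) == y for x, y in DATA) / len(DATA)

def mutate(genome, rng):
    """Jointly mutate structure (mask flips) and parameters (weight noise)."""
    mask, weights, bias = genome
    mask = [m if rng.random() > 0.1 else not m for m in mask]
    weights = [w + rng.gauss(0, 0.5) for w in weights]
    return (mask, weights, bias + rng.gauss(0, 0.5))

def evolve(generations=200, pop_size=20, seed=0):
    """Elitist (1 + pop_size-1) evolution: best fitness never decreases."""
    rng = random.Random(seed)
    pop = [([True] * 3, [rng.gauss(0, 1) for _ in range(3)], 0.0)
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        pop = [best] + [mutate(best, rng) for _ in range(pop_size - 1)]
        best = max(pop, key=fitness)
    return best
```

Because the elite genome is carried over unchanged each generation, the best fitness is monotonically non-decreasing; on this toy task, masking out the noise input x2 costs nothing, which mirrors the feature-deselection intuition in the quoted passage.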
“…FD-NEAT's performance was previously examined on several simple feature selection experiments [14], e.g., the classical "exclusive or" classification problem and maneuvering a robotic car around a race track by selecting relevant sensors in a race car simulator environment (RARS) [28].…”
Section: Feature Deselective Neuroevolution of Augmenting Topologies (mentioning confidence: 99%)
“…We empirically show that PFS-NEAT is able to learn near-optimal control policies in both environments even as the number of irrelevant features increases. The algorithm is also able to identify feature subsets composed of higher fractions of relevant sensors than FS-NEAT [55], FD-NEAT [48], and SAFS-NEAT [33], which can cause PFS-NEAT to outperform these competitors. These experiments illustrate that PFS-NEAT is an effective feature selection algorithm that is suited to scaling up genetic policy search techniques to high-dimensional problems.…”
Section: Introduction (mentioning confidence: 99%)