2003
DOI: 10.1007/3-540-45105-6_109

Natural Coding: A More Efficient Representation for Evolutionary Learning

Abstract: To select an adequate coding is one of the main problems in applications based on Evolutionary Algorithms. Many codings have been proposed to represent the search space for obtaining decision rules. A suitable representation of the individuals of the genetic population can reduce the search space, so that the learning process is accelerated by decreasing the number of generations needed to complete the task. In this sense, natural coding achieves such a reduction and improves the results obtained by …
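The abstract's claim is that collapsing each rule condition into a single natural number shrinks the search space relative to binary or hybrid codings. Below is a minimal, illustrative sketch of that idea for one discretized continuous attribute; the cutpoint scheme and the pair-to-natural enumeration are assumptions for illustration, not the exact mapping defined in the paper.

```python
# Illustrative sketch: encode an interval condition [cut[i], cut[j]] over k
# discretization cutpoints as a single natural number, instead of two genes.
# The enumeration order below is an assumption for illustration only.

def encode_interval(i: int, j: int, k: int) -> int:
    """Map the valid interval (i, j), 0 <= i <= j < k, to one natural number."""
    assert 0 <= i <= j < k
    # Count all pairs in rows before i, then add the offset within row i.
    return i * k - i * (i - 1) // 2 + (j - i)

def decode_interval(n: int, k: int) -> tuple[int, int]:
    """Inverse of encode_interval: recover (i, j) from the natural number."""
    for i in range(k):
        row = k - i               # number of valid pairs starting at i
        if n < row:
            return i, i + n
        n -= row
    raise ValueError("natural number out of range")

k = 5                             # interval bounds after discretization
size = k * (k + 1) // 2           # only k(k+1)/2 valid codes, not k*k
print(size)                       # -> 15
assert all(decode_interval(encode_interval(i, j, k), k) == (i, j)
           for i in range(k) for j in range(i, k))
```

With k cutpoints only k(k+1)/2 codes are valid, versus k² under an unconstrained two-gene encoding; this is the kind of search-space reduction the abstract refers to.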


Citation Types: 0 supporting, 14 mentioning, 0 contrasting

Years Published: 2004–2017


Cited by 14 publications (14 citation statements). References 13 publications.
“…The first one is called Standard-Hider and is the version presented in [9]. The second one is the same algorithm but it incorporates the self-setting of mutation probability according to the previous section.…”
Section: Results (mentioning)
confidence: 99%
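The "self-setting of mutation probability" in this citing work suggests a self-adaptive mutation rate. One common self-adaptation scheme, assumed here purely for illustration (the citing paper may use a different rule), stores the mutation probability inside each individual and perturbs it log-normally before applying it:

```python
import math
import random

# Hypothetical sketch of self-adaptive mutation: each individual carries its
# own mutation probability pm, which is itself mutated (log-normal rule, as
# in evolution strategies) before being applied to the genes.

TAU = 0.2                        # learning rate for the update (assumed value)
PM_MIN, PM_MAX = 0.001, 0.5      # bounds keeping pm in a sensible range

def mutate(genes: list[int], pm: float, cardinality: int) -> tuple[list[int], float]:
    # 1) Self-adapt the mutation probability first.
    pm = pm * math.exp(TAU * random.gauss(0.0, 1.0))
    pm = min(max(pm, PM_MIN), PM_MAX)
    # 2) Then mutate each gene with the new probability.
    child = [random.randrange(cardinality) if random.random() < pm else g
             for g in genes]
    return child, pm

individual = [3, 1, 4, 1, 5]     # natural-coded genes (illustrative values)
child, new_pm = mutate(individual, pm=0.05, cardinality=10)
print(child, round(new_pm, 4))
```

Because pm evolves along with the solution, individuals whose mutation rates suit the current search phase tend to survive, removing the need to hand-tune the parameter.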
“…As we will see in Section 7, two versions of this algorithm will be used: the original algorithm, without changes, and an adapted version to include the feature influence across the evolutionary process. The reader can find a detailed description of Hider in [1,9].…”
Section: Hider (mentioning)
confidence: 99%
“…Some methods use some kind of discretization [1,21,24] to convert the continuous attributes to categorical ones, which allows the use of categorical knowledge representations [18,39]. A different alternative is to use rules based on fuzzy logic.…”
Section: Related Work (mentioning)
confidence: 99%
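The discretization step this statement refers to can be illustrated generically. The equal-width scheme below is one simple possibility, chosen only as a stand-in; the cited methods typically use more elaborate, often supervised, discretizers:

```python
import numpy as np

# Generic equal-width discretization: convert a continuous attribute into
# categorical interval labels, so rule-based learners can treat it as a
# categorical attribute.

def equal_width_discretize(values: np.ndarray, n_bins: int) -> np.ndarray:
    """Return an integer bin label in [0, n_bins) for each value."""
    lo, hi = values.min(), values.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    # np.digitize maps each value to the index of its interval; clip keeps
    # the maximum value inside the last bin.
    return np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)

attribute = np.array([0.2, 1.7, 3.3, 4.9, 2.5])
print(equal_width_discretize(attribute, n_bins=3))   # -> [0 0 1 2 1]
```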
“…In Table XI, we compare the results obtained by ECL on the propositional datasets against the results obtained by four nonevolutionary systems for ICL (C4.5 [24], Naive Bayes [25], SMO [26], and IB1 [27]) and two evolutionary algorithms, HIDER* [28] and GAssist [29]. C4.5 is a decision tree algorithm, Naive Bayes uses Bayes' rule of conditional probabilities to estimate the predicted class, SMO implements the sequential minimal optimization algorithm for training a support vector classifier, and IB1 uses a simple distance measure to find the training instance closest to the given test instance, and predicts the same class as this training instance.…”
Section: Comparison With Other Systems (mentioning)
confidence: 99%