The 2003 Congress on Evolutionary Computation (CEC '03)
DOI: 10.1109/cec.2003.1299394

Learning, evolution and generalisation

Cited by 4 publications (2 citation statements)
References 24 publications

“…A model is overfit if it is specifically tuned to training instances, which means the model will be accurate at predicting the values of those instances, but poor at predicting values of unseen instances. Conventional machine learning uses learning as a paradigm, and evolutionary computation uses evolution as a paradigm; learning is adaptation at the individual level, and evolution is adaptation at the population level [52]. To prevent overfitting, one usually just splits the instance space between training and testing, to validate training models.…”
Section: Chapter 2 Background and Related Work
confidence: 99%
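
The split-and-validate idea quoted above can be made concrete with a short sketch. This is a minimal illustration, assuming scikit-learn is available; the synthetic dataset, the decision-tree model, and all parameter choices are assumptions for demonstration, not taken from the cited paper.

```python
# Minimal sketch of hold-out validation; dataset and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic instance space, split between training and testing.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# An unpruned tree can tune itself almost perfectly to the training instances.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A large gap between the two scores is the signature of overfitting:
# accurate on seen instances, poor on unseen ones.
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))
```

A large train/test gap signals an overfit model; a small gap suggests the model generalises beyond the training instances.
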
“…While learning takes place based on this prediction error, the generality of the resulting function is important. For this reason, it is necessary not only to carry out learning using training data, but also to evaluate the resulting function with the test data [6]. The number of training data sets provided was 1,210.…”
Section: Task Setting
confidence: 99%
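
The train-then-evaluate procedure this excerpt describes can likewise be sketched briefly. This is an illustrative example only, assuming scikit-learn and NumPy; the sine-shaped regression task, the MLP model, and the 1,210-sample size (echoing the excerpt) are hypothetical choices, not the cited study's setup.

```python
# Minimal sketch: learn from training data, judge generality on test data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1210, 1))            # 1,210 instances (illustrative)
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, 1210)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learning proceeds by reducing prediction error on the training data ...
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X_train, y_train)

# ... but the generality of the resulting function is measured on test data.
print("train MSE:", mean_squared_error(y_train, model.predict(X_train)))
print("test MSE:", mean_squared_error(y_test, model.predict(X_test)))
```
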