2009
DOI: 10.1007/s11704-009-0004-8

Pareto analysis of evolutionary and learning systems

Abstract: This paper argues that most adaptive systems, such as evolutionary or learning systems, inherently have multiple objectives to deal with. Very often, there is no single solution that can optimize all the objectives. In this case, the concept of Pareto optimality is key to analyzing these systems. To support this argument, we first present an example that considers the robustness and evolvability trade-off in a redundant genetic representation for simulated evolution. It is well known that ro…

Cited by 7 publications (4 citation statements) · References 32 publications
“…Solutions exhibiting little sensitivity to such variations are labelled as robust and are favored. Robustness is usually addressed either by optimizing the expectation and the variance, or by introducing additional constraints. The topic was reviewed by Jin and Branke and by Beyer and Sendhoff.…”
Section: Multiobjective Optimization
confidence: 99%
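The expectation/variance formulation described in this excerpt can be sketched as a bi-objective Monte Carlo estimate. This is a minimal illustration, not code from the cited works; `noisy_f` is a hypothetical noisy fitness function chosen only to make the estimate concrete:

```python
import random

def noisy_f(x, noise=0.1):
    # Hypothetical noisy fitness: quality of design x under random
    # perturbation (illustrative; not from the cited paper).
    return (x - 1.0) ** 2 + random.gauss(0.0, noise) * x * x

def expectation_and_variance(x, samples=2000):
    # Monte Carlo estimate of the two robustness objectives:
    # minimize the expected fitness AND its variance.
    vals = [noisy_f(x) for _ in range(samples)]
    mean = sum(vals) / samples
    var = sum((v - mean) ** 2 for v in vals) / samples
    return mean, var
```

Treating `(mean, var)` as a pair of objectives to minimize turns robust optimization into exactly the kind of multi-objective problem the surveyed papers analyze with Pareto fronts.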
“…The knee point is not located at the extremes and is of interest because it is believed that the complexity of the models in this region of the Pareto front matches that of the data [14,48] and that the models will not exhibit over-fitting on a validation data set. It is also hypothesized that models in this area will exhibit a smaller prediction variance.…”
confidence: 99%
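One common heuristic for locating the knee point this excerpt refers to is a sketch like the following (an assumption on my part — the cited works may use a different definition): on a 2-D Pareto front, take the point farthest from the straight line joining the front's two extremes.

```python
def knee_point(front):
    # front: points (f1, f2) on a 2-D Pareto front, sorted by f1.
    # Heuristic: the knee is the point with the largest perpendicular
    # distance to the line through the two extreme points.
    (x1, y1), (x2, y2) = front[0], front[-1]

    def dist(p):
        x0, y0 = p
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
        return num / den

    return max(front, key=dist)
```

On a front trading model complexity against training error, this picks the region where extra complexity stops buying much accuracy, which is the intuition behind matching model complexity to the data.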
“…This thesis proposes a unified framework for multi-objective modelling and training applied to a vast set of classification tasks in machine learning. Although multi-objective optimization methods are not widely used in machine learning, the literature is vast in approaches and scenarios used to search for both interpretable and accurate models, models generated to have complementary properties, and conflicting loss functions building ensembles (BRAGA et al., 2006; JIN; SENDHOFF, 2008; JIN et al., 2009). It is also used for model selection, ensemble generation, filtering, and aggregation (ZHOU, 2012); for the classification of imbalanced datasets (AKAN; SAYIN, 2014; GARCÍA et al., 2010); and for multi-task learning (BAGHERJEIRAN, 2007).…”
Section: Summarizing Comments
confidence: 99%
“…Outside the scope of multi-task learning, a possible solution to the problem of weighting the loss functions of the whole set of tasks was proposed in Engen et al. (2009) and Wang et al. (2014), approaching multi-class classification by treating the minimization of the multiple learning losses, one per class, as conflicting objectives, thus resorting to a multi-objective optimization method. Supported by other scenarios in which multi-objective optimization methods were used to solve machine learning problems (JIN; SENDHOFF, 2008; JIN et al., 2009), this work also conceives the learning losses as conflicting objectives, but now under the framework of multi-task learning and explicitly adopting parameter sharing, a perspective that, to the best of our knowledge, was still unexplored.…”
Section: Multi-task Learning
confidence: 99%
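The idea of treating per-class (or per-task) losses as conflicting objectives reduces model comparison to Pareto dominance over loss vectors. A minimal sketch, with hypothetical model names and loss values chosen only for illustration:

```python
def dominates(a, b):
    # a dominates b (minimization) if a is no worse in every loss
    # component and strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_models(models):
    # models: name -> vector of per-class losses (hypothetical values).
    # Keep only models not dominated by any other candidate.
    return {n: v for n, v in models.items()
            if not any(dominates(w, v) for m, w in models.items() if m != n)}
```

Example: with `{"a": (0.1, 0.5), "b": (0.2, 0.3), "c": (0.2, 0.6)}`, model `c` is dominated (both `a` and `b` do at least as well on every class), so the Pareto set is `{a, b}` — the trade-off set a learner would then choose from.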