2020
DOI: 10.1109/tfuzz.2020.2968863
Multitasking Genetic Algorithm (MTGA) for Fuzzy System Optimization

Abstract: Multi-task learning uses auxiliary data or knowledge from relevant tasks to facilitate learning in a new task. Multitask optimization applies multi-task learning to optimization to study how to effectively and efficiently tackle multiple optimization problems simultaneously. Evolutionary multi-tasking, or multi-factorial optimization, is an emerging subfield of multitask optimization, which integrates evolutionary computation and multi-task learning. This paper proposes a novel and easy-to-implement multi-t…

Cited by 70 publications (17 citation statements)
References 25 publications (43 reference statements)
“…An example of this specific trend is the well-known assortative mating, which can be materialized through i) a common crossover operation, as in MFEA, MFEA-II, and many other MFO techniques [18,31,27]; ii) mutation strategies, as in DE-inspired techniques [98,72,73]; or iii) the velocity-based movements of PSO-inspired methods [69,75,97]. Another commonly used mechanism for conducting intra-task knowledge transfer is the one used by MM methods, in which multiple populations coexist, each one devoted to the resolution of one specific task [25,105,107,114,26]. In those cases, knowledge sharing is conducted mainly by migrating solutions among subpopulations, thereby modifying the optimization task of individuals.…”
Section: Current Methodological Trends In Evolutionary Multitask Opti…
confidence: 99%
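The assortative-mating mechanism described above can be sketched as follows. This is an illustration of the general MFEA-style idea, not any paper's exact code; the parameter name `rmp` (random mating probability) and its value are assumptions for the example.

```python
import random

RMP = 0.3  # random mating probability; an assumed, tunable value

def one_point_crossover(g1, g2, rng):
    """Plain one-point crossover in the unified genome space."""
    cut = rng.randint(1, len(g1) - 1)
    return g1[:cut] + g2[cut:]

def assortative_mating(parent1, parent2, rng=random):
    """Each parent is a (genome, skill_factor) pair. Parents crossover when
    they share a skill factor (optimize the same task), or otherwise only
    with probability RMP; this gates inter-task knowledge transfer."""
    (g1, t1), (g2, t2) = parent1, parent2
    if t1 == t2 or rng.random() < RMP:
        child = one_point_crossover(g1, g2, rng)
        skill = rng.choice([t1, t2])  # child inherits one parent's task
    else:
        child, skill = list(g1), t1   # fall back to copying/mutating a parent
    return child, skill
```

Raising `rmp` increases cross-task gene flow; setting it to zero reduces the algorithm to independent per-task evolution.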
“…The same trend is also adopted in [106], where an MM method named Differential Evolutionary Multitask Optimization is proposed, in which knowledge sharing is conducted through the migration of individuals among populations. A similar philosophy is followed by the Multitasking Genetic Algorithm modeled in [107], in which a population of solutions is created for each optimization problem, and knowledge sharing is realized at each iteration through the transfer of chromosomes among populations.…”
Section: Explicit Knowledge Transfer Based Static Solvers
confidence: 99%
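The migration-based transfer described in this excerpt can be sketched minimally as below. This is an illustrative example of the mechanism, not the authors' MTGA implementation; the function name and toy fitness functions are assumptions.

```python
def migrate(pop_a, pop_b, fit_a, fit_b, n_migrants=1):
    """Exchange the n best individuals between two task-specific
    subpopulations. fit_* score an individual; lower is better here."""
    sorted_a = sorted(pop_a, key=fit_a)
    sorted_b = sorted(pop_b, key=fit_b)
    # migrants replace the worst individuals of the receiving population
    new_a = sorted_a[:-n_migrants] + sorted_b[:n_migrants]
    new_b = sorted_b[:-n_migrants] + sorted_a[:n_migrants]
    return new_a, new_b

# toy example: task A minimizes the bit sum, task B maximizes it
pop_a = [(0, 0, 1), (0, 1, 1), (1, 1, 1)]
pop_b = [(1, 0, 0), (0, 0, 0), (1, 1, 0)]
pop_a, pop_b = migrate(pop_a, pop_b, fit_a=sum, fit_b=lambda x: -sum(x))
```

After the call, each subpopulation contains the other task's current best chromosome, which is the sense in which knowledge is shared explicitly rather than through a unified search space.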
“…In recent years, great efforts have been made to improve FSs [8]. Evolutionary algorithms (EAs) [9,10], the gradient descent (GD) algorithm [11], and GD plus least squares estimation (LSE) [12] have been proposed to optimize FSs. Although EAs can search for the optimal solution given enough iterations, their computational cost is too high for them to be suitable for the optimization of FSs.…”
Section: Introduction
confidence: 99%
“…Many data-driven algorithms have been proposed to tune TSK fuzzy systems [6]-[13]. Optimizing a TSK fuzzy system involves fine-tuning both the antecedent parameters and the consequent parameters, which can be done separately or simultaneously [7], [8], [10]-[17]. When they are optimized separately, the consequent parameters are usually obtained by least squares estimation (LSE), e.g., in an adaptive neuro-fuzzy inference system (ANFIS) [10].…”
Section: Introduction
confidence: 99%
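The LSE step mentioned above is possible because, with the antecedent membership functions held fixed, a first-order TSK output is linear in the consequent parameters. The sketch below illustrates this on a toy scalar problem; the Gaussian membership centers and width are assumed values for the example, and this is not ANFIS itself.

```python
import numpy as np

# For rules y_r = p_r*x + q_r with normalized firing strengths wbar_r(x),
# the output y_hat = sum_r wbar_r(x) * (p_r*x + q_r) is linear in (p_r, q_r),
# so the consequents solve an ordinary least-squares problem.

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=50)   # 50 scalar training inputs
y = 2.0 * X + 1.0                     # toy linear target

centers = np.array([-1.0, 1.0])       # assumed Gaussian MF centers
sigma = 1.0                           # assumed MF width

def norm_firing(x):
    w = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    return w / w.sum()                # normalized firing strengths

# one regressor row per sample: [wbar_1*x, wbar_1, wbar_2*x, wbar_2]
Phi = np.array([np.concatenate([[w * x, w] for w in norm_firing(x)])
                for x in X])

theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # theta = [p1, q1, p2, q2]
residual = np.max(np.abs(Phi @ theta - y))       # near zero for a linear target
```

Because the normalized firing strengths sum to one, a linear target is representable exactly (every rule can use the same line), so the fit residual is at machine precision; in practice one alternates this LSE step with GD or EA updates of the antecedents.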