2020
DOI: 10.1109/tevc.2019.2944180

Distributed Individuals for Multiple Peaks: A Novel Differential Evolution for Multimodal Optimization Problems

Abstract: Locating more peaks and refining the solution accuracy on the found peaks are two challenging issues in solving multimodal optimization problems (MMOPs). To deal with these two challenges, a distributed individuals differential evolution (DIDE) algorithm is proposed in this article based on a distributed individuals for multiple peaks (DIMP) framework and two novel mechanisms. First, the DIMP framework provides sufficient diversity by letting each individual act as a distributed unit to track a peak. Based on …
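The DIMP idea described in the abstract — each individual acting as its own distributed unit that tracks a peak — can be illustrated with a minimal sketch. This is not the paper's actual DIDE algorithm; it is a plain DE/rand/1 loop with greedy one-to-one selection (each individual only compares against its own trial), which is the basic mechanism that lets different individuals settle on different peaks. The toy objective and all parameter values are illustrative assumptions.

```python
import random

def multimodal(x):
    # Toy two-peak objective: maxima (value 0) near x = -2 and x = 2.
    return -((x - 2) ** 2) * ((x + 2) ** 2)

def distributed_de_sketch(pop_size=20, iters=200, F=0.5,
                          lo=-5.0, hi=5.0, seed=1):
    """Hedged sketch: each individual is its own unit. A trial vector is
    built from three random peers (DE/rand/1), and the individual keeps
    the better of itself and its trial. Because selection is strictly
    per-individual, the population is not forced toward a single basin."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    fit = [multimodal(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = min(hi, max(lo, pop[a] + F * (pop[b] - pop[c])))
            tf = multimodal(trial)
            if tf > fit[i]:  # distributed (per-individual) greedy selection
                pop[i], fit[i] = trial, tf
    return pop, fit
```

The actual DIDE framework adds mechanisms beyond this sketch (e.g., for diversity maintenance and accuracy refinement, per the abstract), so this should be read only as a baseline for the "one individual, one peak" intuition.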


Cited by 109 publications (51 citation statements)
References 42 publications
“…For future work, the algorithm proposed in this article will be applied to solve problems with more complicated challenges, such as large-scale [74], multi/many-objective [75], multimodal [76], dynamics [77], and constraint [78]. Moreover, the BS and LDG will be extended to more different types of surrogate models to further study their efficiency in improving the algorithm performance.…”
Section: Discussion
confidence: 99%
“…Some of the most widely used algorithms for text classification in machine learning are Logistic Regression, Naïve Bayes, Support Vector Machine, Decision Tree, K-Nearest Neighbors, and Random Forest. While these disparate algorithms may suit different sorts of problem sets, the performance of the aforestated classifiers relies heavily on the feature engineering process [22]. The relevance and quality of the features extracted are directly proportional to the performance of an algorithm.…”
Section: Background — A. Automatic Text Classification
confidence: 99%
“…Zhao et al [51] borrowed the local binary operator idea in image processing to help efficiently form the niches. Chen et al [52] designed a distributed individual DE that treated each individual as a distributed niche to track peaks. These algorithms all use DE as the base algorithm to solve MMOPs and achieve a great success.…”
Section: GA for MMOPs
confidence: 99%
“…In this paper, four accuracy levels (ε), namely ε = 1E−01, ε = 1E−02, ε = 1E−03, and ε = 1E−04, are adopted in the experiments. The results at ε = 1E−04 are mainly reported, as in [49]-[52]. Besides, the NP, MaxFEs, and K of MaHDE adopt the same settings as [51].…”
Section: A. Test Functions and Experimental Settings
confidence: 99%
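The accuracy levels ε quoted above are typically used in MMOP benchmarking to compute the peak ratio: a known optimum counts as found if some solution reaches its fitness to within ε. The sketch below assumes a simplified check (a niche `radius` plus the ε fitness tolerance); the function names, the `radius` parameter, and the exact matching rule are illustrative assumptions, not the benchmark suite's precise procedure.

```python
def peak_ratio(solutions, optima, fitness, epsilon, radius=0.5):
    """Fraction of known optima 'found' by the solution set: an optimum
    counts as found if some solution lies within `radius` of it and its
    fitness is within `epsilon` of the optimal fitness value."""
    found = 0
    for opt in optima:
        f_opt = fitness(opt)
        for s in solutions:
            if abs(s - opt) <= radius and abs(fitness(s) - f_opt) <= epsilon:
                found += 1
                break  # count each optimum at most once
    return found / len(optima)
```

Tightening ε from 1E−01 to 1E−04, as in the quoted setup, makes the criterion progressively stricter, so an algorithm's peak ratio can only stay the same or drop at finer accuracy levels.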