2019
DOI: 10.1016/j.asoc.2019.01.047
Adaptive comprehensive learning particle swarm optimization with cooperative archive

Cited by 44 publications (25 citation statements)
References 49 publications
“…The eMuPSO algorithm was first implemented together with the LLS for optimizing the observation matrix and dynamic parameter estimation of all six links of the 6DOF robot manipulator. The ideal parameters of the manipulator are given in (25)–(32). The proposed algorithm was then evaluated on thirty-six benchmark functions, comprising twenty-four variable-dimension and twelve constant-dimension benchmark functions.…”
Section: Results
Citation type: mentioning (confidence: 99%)
“…In [30] an external archive was employed to preserve the non-dominated solutions visited by the particles, enabling the evolutionary search strategies to exchange useful information among them, and [31] used a grid-based approach for the archiving process and the ε-dominance method to update the archive, which helps the algorithm increase the diversity of solutions. Reference [32] used a cooperative archive to exploit the valuable information of the current swarm and the archive. Information about the elite particles from dynamic sub-swarms was used in [33] to improve the following sub-swarm, while [32] introduced a new velocity updating technique that explores the external archive of non-dominated solutions in the current swarm.…”
Section: Elite Archive
Citation type: mentioning (confidence: 99%)
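The non-dominated external archive described in this statement can be sketched minimally. This is an illustrative assumption, not the cited papers' implementations: the function names, the tuple-of-objectives representation, and the unbounded archive (no grid or ε-dominance pruning) are all simplifications for minimization problems.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def update_archive(archive, candidate):
    """Insert a candidate objective vector into a non-dominated archive.

    If any archived solution dominates the candidate, the archive is
    unchanged; otherwise the candidate is added and every archived
    solution it dominates is pruned.
    """
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]
```

For example, inserting (1, 2), then (2, 1), then (0, 0) into an empty archive leaves only (0, 0), since it dominates both earlier entries.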
“…A comprehensive learning PSO (CLPSO) was proposed in [52], leveraging the useful information contained in the personal best positions of other, non-fittest swarm members to update each particle's current velocity and position. Inspired by CLPSO, an adaptive comprehensive learning PSO with cooperative archive (ACLPSO-CA) was proposed in [36]. An adaptive mechanism was designed to dynamically adjust the comprehensive learning probability of each ACLPSO-CA particle during the search by referring to its current search performance.…”
Section: Modification in Learning Strategy
Citation type: mentioning (confidence: 99%)
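The CLPSO learning strategy this statement describes, in which each dimension of a particle's velocity learns from an exemplar drawn from other members' personal bests, can be sketched as follows. The function name, parameter values, and exemplar tournament are illustrative assumptions consistent with the description above, not the published algorithm's exact code (in particular, the adaptive per-particle learning probability of ACLPSO-CA is reduced here to a fixed `pc`).

```python
import random


def clpso_velocity_update(v, x, pbests, pbest_fit, i, w=0.7, c=1.49, pc=0.3):
    """One CLPSO-style velocity update for particle i (minimization sketch).

    For each dimension d: with probability pc the exemplar is the fitter
    of two randomly chosen other particles' personal bests; otherwise it
    is particle i's own personal best.
    """
    dims = len(x[i])
    others = [j for j in range(len(x)) if j != i]
    new_v = []
    for d in range(dims):
        if random.random() < pc:
            # tournament between two other particles' personal bests
            a, b = random.sample(others, 2)
            exemplar = pbests[a] if pbest_fit[a] < pbest_fit[b] else pbests[b]
        else:
            exemplar = pbests[i]
        new_v.append(w * v[i][d] + c * random.random() * (exemplar[d] - x[i][d]))
    return new_v
```

Because every dimension may follow a different exemplar, a particle can combine good coordinates from several swarm members, which is the source of CLPSO's improved diversity relative to learning only from the global best.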
“…Different mechanisms have been advocated to optimally balance the exploration and exploitation strengths of PSO in order to improve its overall performance in solving various types of complex optimization problems [36,37]. Parameter adaptation is a commonly used mechanism to adjust the exploration and exploitation strengths of PSO through the incorporation of various new algorithm-specific parameters.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)