2010
DOI: 10.3724/sp.j.1087.2010.01883
Modified immune particle swarm optimization algorithm and its application

Cited by 6 publications (4 citation statements); references 0 publications.
“…If the current optimal position is not the global optimum, the particle swarm cannot search the solution space further; the algorithm falls into a local optimum, producing the so-called premature convergence phenomenon [12,13].…”
Section: Basic Particle Swarm Optimization Algorithm (PSO) and It…
confidence: 99%
“…Therefore, appropriate learning factors can accelerate convergence and also help avoid falling into local optima. The learning-factor update formula can be represented as Eq. (12). There is no doubt that the velocity needs adjusting when it is too high (such as …), which prevents the particles from flying out of the target region. The velocity update formula is represented as:…”
Section: The Improved Particle Swarm Optimization Algorithm (MLAM…)
confidence: 99%
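The velocity cap this passage alludes to can be sketched as follows. The excerpt omits the actual formula, so the function name and the per-dimension bound `v_max` are illustrative assumptions, not the cited paper's notation:

```python
def clamp_velocity(v, v_max):
    """Clip each velocity component to [-v_max, v_max] so that a
    particle cannot fly out of the target search region in one step."""
    return [max(-v_max, min(v_max, vi)) for vi in v]

# Components beyond the bound are pulled back to it; others pass through.
print(clamp_velocity([5.0, -7.0, 1.0], 4.0))  # [4.0, -4.0, 1.0]
```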
“…In order to produce a smile image, the lip corners should rise. So we first generate two triangles with vertices (1, 2, 4) and (2, 3, 4). Then, to warp the two triangles, some other triangles must be defined outside them.…”
Section: Expression Deformation
confidence: 99%
“…The learning factors c1 and c2 are non-negative constants representing the weight of the particle's preferences: c1 weights its own experience, c2 weights the group's experience. Following common practice, c1 and c2 are both set to 2.05 [12]. The random numbers r1 and r2 take values in (0, 1).…”
Section: Particle Swarm Optimization Algorithm
confidence: 99%
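The update rule this passage describes (with c1 = c2 = 2.05 and r1, r2 drawn uniformly from (0, 1)) can be sketched as below, for a single scalar dimension. The inertia weight `w` and the function name are illustrative assumptions; the excerpt does not state them:

```python
import random

def pso_velocity_update(v, x, pbest, gbest, w=0.7, c1=2.05, c2=2.05):
    """One PSO velocity update for a single dimension:
    v' = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
    where r1, r2 are fresh uniform samples from (0, 1)."""
    r1, r2 = random.random(), random.random()
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

When the particle already sits at both its personal best and the global best, both attraction terms vanish and only the inertia term w*v remains, e.g. `pso_velocity_update(1.0, 0.0, 0.0, 0.0, w=0.5)` returns 0.5.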