2010
DOI: 10.1016/j.asoc.2009.10.012
Automatically extracting T–S fuzzy models using cooperative random learning particle swarm optimization

Cited by 85 publications (45 citation statements)
References 31 publications
“…Cooperative random learning particle swarm optimization (CRPSO), proposed by Zhao et al., employs several sub-swarms to search the space, and useful information is exchanged among them during the iteration process [15]. In CRPSO, at each iteration, velocity vectors are updated by…”
Section: CRPSO-based Fuzzy Model
confidence: 99%
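The excerpt above elides the exact velocity-update rule, so the following is only a minimal sketch of the cooperative-random-learning idea it describes: several sub-swarms search in parallel, and the social term of each velocity update is drawn from the best position of a randomly selected sub-swarm, which is how information is exchanged. The function name, parameter defaults, and data layout are illustrative assumptions, not the authors' implementation.

```python
import random

def crpso_minimize(f, dim, bounds, n_swarms=3, swarm_size=10,
                   iters=200, w=0.7, c1=1.5, c2=1.5):
    """Sketch of cooperative random learning PSO (hypothetical API):
    each sub-swarm runs a standard PSO update, except the social
    attractor is the global best of a randomly chosen sub-swarm."""
    lo, hi = bounds
    swarms = []
    for _ in range(n_swarms):
        pos = [[random.uniform(lo, hi) for _ in range(dim)]
               for _ in range(swarm_size)]
        vel = [[0.0] * dim for _ in range(swarm_size)]
        pbest = [p[:] for p in pos]
        pval = [f(p) for p in pos]
        g = min(range(swarm_size), key=lambda i: pval[i])
        swarms.append({"pos": pos, "vel": vel, "pbest": pbest,
                       "pval": pval, "gbest": pbest[g][:], "gval": pval[g]})
    for _ in range(iters):
        for s in swarms:
            for i in range(swarm_size):
                # Random learning: borrow the best position of a
                # randomly selected sub-swarm (possibly our own).
                donor = random.choice(swarms)
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    s["vel"][i][d] = (w * s["vel"][i][d]
                                      + c1 * r1 * (s["pbest"][i][d] - s["pos"][i][d])
                                      + c2 * r2 * (donor["gbest"][d] - s["pos"][i][d]))
                    # Clamp positions to the search bounds.
                    s["pos"][i][d] = min(hi, max(lo, s["pos"][i][d] + s["vel"][i][d]))
                val = f(s["pos"][i])
                if val < s["pval"][i]:
                    s["pval"][i], s["pbest"][i] = val, s["pos"][i][:]
                    if val < s["gval"]:
                        s["gval"], s["gbest"] = val, s["pos"][i][:]
    best = min(swarms, key=lambda s: s["gval"])
    return best["gbest"], best["gval"]
```

For example, minimizing the 2-D sphere function with `crpso_minimize(lambda p: sum(t * t for t in p), 2, (-5.0, 5.0))` converges toward the origin; the exchange through a random donor swarm is what distinguishes this sketch from independent parallel PSO runs.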
“…CRPSO is a variant of standard particle swarm optimization (PSO). The structure and parameters of the fuzzy models are encoded into a particle and evolve together, so that the optimal structure and parameters can be achieved simultaneously [15]. Extracted fuzzy if-then rules are easily interpreted by humans and may be evaluated against human expertise (rule evaluation).…”
Section: Introduction
confidence: 99%
“…Thus, researchers have to rely on stochastic global optimization techniques such as the particle swarm optimization algorithm (PSO) [1], the differential evolution algorithm (DE) [2], the genetic algorithm (GA) [3], the harmony search algorithm (HS) [4,5], etc., because these methods do not need to compute gradients of the objective function and do not require continuity of the problem variables. Given these advantages, researchers have paid closer attention to such stochastic global optimization algorithms recently and have applied them to many complex optimization problems, including reliability problems [6,7], the tile manufacturing process [8], the adiabatic styrene reactor [9], off-centre bracing systems [10], T–S fuzzy models [11], and so on.…”
Section: Introduction
confidence: 99%
“…Despite the common practical finding that a general global optimization algorithm is usually much less efficient than versions tuned to the problem at hand, it is still of interest to gauge the baseline performance of a global optimization scheme using benchmark problems. Even in the most recent examples of such tests [1][2][3][4][5][6] (selected at random from the recent literature), it is customary to employ certain standard benchmark functions, with the implicit (but untested) assumption that the difficulty of these benchmark functions roughly matches that of real-world applications. Some of these benchmark functions are even advertised as particularly challenging.…”
Section: Introduction
confidence: 99%