2018
DOI: 10.1109/tnnls.2016.2634548

Experienced Gray Wolf Optimization Through Reinforcement Learning and Neural Networks

Abstract: In this paper, a variant of gray wolf optimization (GWO) that uses reinforcement learning principles combined with neural networks to enhance the performance is proposed. The aim is to overcome, by reinforced learning, the common challenge of setting the right parameters for the algorithm. In GWO, a single parameter is used to control the exploration/exploitation rate, which influences the performance of the algorithm. Rather than using a global way to change this parameter for all the agents, we use reinforce…
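The abstract's central idea is a per-wolf, feedback-driven control of the exploration/exploitation parameter rather than one global schedule. The sketch below illustrates that idea only in a simplified form: a plain reward rule (shrink a on improvement, grow it on stagnation) stands in for the paper's neural-network-based reinforcement learning, and the objective function, bounds, greedy acceptance, and step size of 0.05 are all illustrative assumptions.

```python
import numpy as np

def sphere(x):
    """Toy objective: minimise the sum of squares."""
    return float(np.sum(x ** 2))

def experienced_gwo_sketch(obj=sphere, dim=10, n_wolves=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = -5.0, 5.0
    X = rng.uniform(lo, hi, (n_wolves, dim))
    fit = np.array([obj(x) for x in X])
    a = np.full(n_wolves, 2.0)  # per-agent exploration parameter (illustrative)

    for _ in range(iters):
        order = np.argsort(fit)
        leaders = X[order[:3]].copy()  # alpha, beta, delta (copies, so updates below don't shift them)

        for i in range(n_wolves):
            new_x = np.zeros(dim)
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a[i] * r1 - a[i]  # classic GWO coefficient, but with the wolf's own a
                C = 2 * r2
                D = np.abs(C * leader - X[i])
                new_x += leader - A * D
            new_x = np.clip(new_x / 3.0, lo, hi)

            new_fit = obj(new_x)
            # Crude stand-in for the paper's RL controller: greedy acceptance
            # (an illustrative choice) lets improvement act as the reward signal.
            # Reward improvement by exploiting more (smaller a); punish
            # stagnation by exploring more (larger a).
            if new_fit < fit[i]:
                X[i], fit[i] = new_x, new_fit
                a[i] = max(0.0, a[i] - 0.05)
            else:
                a[i] = min(2.0, a[i] + 0.05)

    best = int(np.argmin(fit))
    return X[best], fit[best]

if __name__ == "__main__":
    x_best, f_best = experienced_gwo_sketch()
    print("best fitness:", f_best)
```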

Cited by 160 publications (59 citation statements)
References 26 publications
“…Another, different approach was used in [40], where the researchers used Minimum Redundancy Maximum Relevance (MRMR) initialization, which combines relevance to the target class with redundancy to the other features.…”
Section: Initialize Population
Mentioning confidence: 99%
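As a rough illustration of an MRMR-style initialization for a feature-selection wolf population, the sketch below ranks features greedily by mutual-information relevance minus a redundancy term; absolute Pearson correlation is used as the redundancy proxy, and the helper names and the biased sampling of the initial wolves are assumptions, not the scheme of [40].

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

def mrmr_ranking(X, y):
    """Greedy MRMR ordering: maximise relevance to y minus mean redundancy
    (approximated here by absolute Pearson correlation) to already-picked features."""
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_features:
        remaining = [j for j in range(n_features) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean() for j in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected

def init_population(ranking, n_wolves, subset_size, rng):
    """Bias the initial binary wolves toward top-ranked MRMR features."""
    n_features = len(ranking)
    weights = np.linspace(1.0, 0.1, n_features)  # higher sampling weight for better rank
    probs = np.empty(n_features)
    probs[ranking] = weights / weights.sum()
    pop = np.zeros((n_wolves, n_features), dtype=int)
    for i in range(n_wolves):
        chosen = rng.choice(n_features, size=subset_size, replace=False, p=probs)
        pop[i, chosen] = 1
    return pop

if __name__ == "__main__":
    X, y = make_classification(n_samples=200, n_features=20, n_informative=5, random_state=0)
    rng = np.random.default_rng(0)
    ranking = mrmr_ranking(X, y)
    population = init_population(ranking, n_wolves=10, subset_size=6, rng=rng)
    print(population[:3])
```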
“…A composed weighted fitness function is also used in the GWO algorithm [40]; it comprises the error rate and the number of selected features.…”
Section: Fitness Function
Mentioning confidence: 99%
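A common form of such a composed weighted fitness is alpha * error_rate + (1 - alpha) * (selected / total). The sketch below assumes that form; the weight alpha = 0.99 and the 5-fold KNN evaluator are illustrative choices, not taken from [40].

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def composed_fitness(mask, X, y, alpha=0.99):
    """Weighted fitness for feature selection: classification error rate
    plus a penalty proportional to the fraction of selected features."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():  # an empty subset is treated as worst-case
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=5).mean()
    error_rate = 1.0 - acc
    feature_ratio = mask.sum() / mask.size
    return alpha * error_rate + (1.0 - alpha) * feature_ratio

if __name__ == "__main__":
    X, y = make_classification(n_samples=300, n_features=15, n_informative=4, random_state=1)
    full_mask = np.ones(15, dtype=int)
    small_mask = np.zeros(15, dtype=int)
    small_mask[:4] = 1
    print("all features :", composed_fitness(full_mask, X, y))
    print("first four   :", composed_fitness(small_mask, X, y))
```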
“…Because gradient descent, as part of the BP algorithm, can easily be trapped in a local minimum, GOA is employed as a stochastic algorithm to escape local minima. Figure shows the architecture of the GOA‐based CNN.…”
Section: Segmentation by GOA-based CNN
Mentioning confidence: 99%
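To illustrate the cited motivation rather than GOA itself, the toy below compares gradient descent, which stalls in a local basin of a 1-D multimodal loss, with a simple accept-if-better Gaussian perturbation search that can hop between basins; the loss function, starting point, and step sizes are all illustrative assumptions, and the random search is a stand-in for the grasshopper optimization algorithm, not an implementation of it.

```python
import numpy as np

def loss(w):
    """Multimodal toy loss: sinusoid plus a shallow quadratic centred at w = 4."""
    return np.sin(3 * w) + 0.1 * (w - 4) ** 2

def grad(w, eps=1e-6):
    """Central-difference gradient of the toy loss."""
    return (loss(w + eps) - loss(w - eps)) / (2 * eps)

def gradient_descent(w0, lr=0.01, steps=500):
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def stochastic_search(w0, sigma=1.0, steps=500, seed=0):
    """Accept-if-better Gaussian perturbation search: needs no gradients and
    its jumps are wide enough to move between basins of the toy loss."""
    rng = np.random.default_rng(seed)
    w, best = w0, loss(w0)
    for _ in range(steps):
        cand = w + rng.normal(0.0, sigma)
        if loss(cand) < best:
            w, best = cand, loss(cand)
    return w

if __name__ == "__main__":
    w_gd = gradient_descent(0.0)
    w_ss = stochastic_search(0.0)
    print(f"gradient descent : w={w_gd:.3f}, loss={loss(w_gd):.3f}")
    print(f"stochastic search: w={w_ss:.3f}, loss={loss(w_ss):.3f}")
```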
“…where t is the iteration number and Max_iter is the total number of iterations allowed for the optimization [9]. The pseudo code of the GWO algorithm is displayed in Figure 1.…”
Section: Mathematical Modelling
Mentioning confidence: 99%
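This statement refers to the standard GWO schedule in which the control parameter a is decreased linearly over the iterations; a minimal sketch, assuming the usual form a = 2 * (1 - t / Max_iter), follows.

```python
def exploration_parameter(t, max_iter, a_initial=2.0):
    """Standard GWO schedule: a decreases linearly from a_initial to 0,
    shifting the search from exploration (|A| > 1) toward exploitation (|A| < 1)."""
    return a_initial * (1.0 - t / max_iter)

if __name__ == "__main__":
    max_iter = 10
    for t in range(max_iter + 1):
        print(f"t={t:2d}  a={exploration_parameter(t, max_iter):.2f}")
```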
“…At any given time, all solutions lie at a corner of a hypercube, and the solutions are encoded in binary form. As the wolf positions are updated following the basic GWO algorithm, a binary restriction must be maintained according to equation (9).…”
Section: Binary Gray Wolf Optimization (BGWO)
Mentioning confidence: 99%
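Equation (9) itself is not reproduced in the citation statement. A common way to keep the wolves on hypercube corners is to pass the continuous GWO position through a sigmoid transfer function and threshold it stochastically; the sketch below assumes that binarization rather than the exact form used in the cited work.

```python
import numpy as np

def binarize(continuous_position, rng):
    """Map a continuous GWO position to a binary vector (a hypercube corner)
    via a sigmoid transfer function and a stochastic threshold."""
    prob = 1.0 / (1.0 + np.exp(-continuous_position))  # sigmoid maps each coordinate into [0, 1]
    return (rng.random(continuous_position.shape) < prob).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_continuous = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(binarize(x_continuous, rng))
```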