2021
DOI: 10.1002/rob.22052
Highly optimized Q‐learning‐based bees approach for mobile robot path planning in static and dynamic environments

Abstract: This paper proposes a novel approach to finding an optimal path for a mobile robot in a two-dimensional environment. The optimal path is found by combining the Bees Algorithm (BA) with the Q-Learning algorithm. A new method is proposed to build the initial population regardless of the number and location of obstacles in the environment. Q-Learning is implemented as the local search function of the BA. The hybridization of the BA and Q-Learning aims to find the optimal path wit…
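The Q-learning component described in the abstract can be sketched as a standard tabular update on a small grid world. This is a minimal illustrative sketch only: the grid size, reward scheme, and hyperparameters below are assumptions, not the paper's actual configuration, and the BA hybridization is omitted.

```python
import numpy as np

GRID = 5                                       # 5x5 obstacle-free grid (assumed)
GOAL = (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

q = np.zeros((GRID, GRID, len(ACTIONS)))       # tabular Q-values
alpha, gamma, eps = 0.1, 0.9, 0.2              # illustrative hyperparameters
rng = np.random.default_rng(0)

def step(state, a):
    """Move in the grid, clipping at the borders; -1 per move, +10 at the goal."""
    r, c = state
    dr, dc = ACTIONS[a]
    nr = min(max(r + dr, 0), GRID - 1)
    nc = min(max(c + dc, 0), GRID - 1)
    reward = 10.0 if (nr, nc) == GOAL else -1.0
    return (nr, nc), reward

for episode in range(500):
    state = (0, 0)
    while state != GOAL:
        # Epsilon-greedy action selection.
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(q[state]))
        nxt, reward = step(state, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        q[state][a] += alpha * (reward + gamma * q[nxt].max() - q[state][a])
        state = nxt

# Greedy rollout of the learned policy from the start cell.
path, state = [(0, 0)], (0, 0)
while state != GOAL and len(path) < 30:
    state, _ = step(state, int(np.argmax(q[state])))
    path.append(state)
```

On this empty grid the greedy rollout reaches the goal; in the paper's setting the learned policy instead serves as BA's local search around candidate paths among obstacles.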

Cited by 18 publications (8 citation statements)
References 29 publications
“…Similarly, current path planning based on Q-Learning [31,32] still faces many obstacles in practice, such as the huge amount of simulation and feedback needed to generate a path. When the synergy of multiple agents is considered, Q-Learning can achieve a more comprehensive and better effect.…”
Section: Literature Review
confidence: 99%
“…This article uses Q-learning [9] to find the robot's optimal actions. Bonny et al. [10] presented a new method to find the best path for mobile robots in a two-dimensional environment, using the Bees Algorithm (BA) together with the Q-learning algorithm.…”
Section: Related Work
confidence: 99%
“…In this paper, the sampling method arranges the error values δ_j in descending order and then calculates the sampling probability, as shown in (10). When sampling nonuniformly, the learning rate should be adjusted according to the sampling probability α.…”
Section: B. Q-Learning Based on Prioritized Weight
confidence: 99%
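The rank-based sampling described in this excerpt can be sketched in the spirit of prioritized experience replay. Equation (10) of the cited paper is not reproduced here; the 1/rank priority, the exponent `a`, and the importance-sampling correction `beta` are assumptions made for illustration.

```python
import numpy as np

def sampling_probs(deltas, a=0.6):
    """Sort |errors| in descending order and turn ranks into probabilities.

    Rank 1 (largest error) gets the highest priority 1/rank, raised to an
    assumed exponent `a`, then normalized to sum to 1.
    """
    order = np.argsort(-np.abs(deltas))           # indices, largest error first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(deltas) + 1)  # rank 1 = largest error
    prios = (1.0 / ranks) ** a
    return prios / prios.sum()

deltas = np.array([0.5, 2.0, 0.1, 1.0])           # illustrative error values δ_j
p = sampling_probs(deltas)

# Importance-sampling weights compensate for the nonuniform sampling when the
# learning update is scaled (beta typically anneals toward 1 during training).
beta = 0.4
w = (len(deltas) * p) ** (-beta)
w /= w.max()
```

The sample with the largest error (δ = 2.0) receives the highest sampling probability and, correspondingly, the smallest importance-sampling weight.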
“…From enhancing the efficiency and sustainability of smart microgrids [9][10][11][12][13][14][15] to pioneering methods for sustainable electrical power generation from biogas [16][17][18] and improving the reliability and performance of battery systems in hybrid electric vehicles [19][20][21][22][23][24], metaheuristic algorithms are reshaping the landscape of technological and environmental solutions. Their role extends to optimizing renewable energy systems, enhancing climate control, and advancing robotic systems through precise control and estimation capabilities [25][26][27][28][29][30][31][32][33][34][35][36][37][38]. Furthermore, the development of hybrid estimation-based techniques and robust control strategies for various applications, including partial discharge localization [39][40][41] and flexible link manipulators [42][43][44][45][46], demonstrates the algorithms' broad applicability and robustness.…”
Section: Introduction
confidence: 99%