2023
DOI: 10.3390/math11112476

Optimized-Weighted-Speedy Q-Learning Algorithm for Multi-UGV in Static Environment Path Planning under Anti-Collision Cooperation Mechanism

Abstract: With the accelerated development of smart cities, the concept of a “smart industrial park”, in which unmanned ground vehicles (UGVs) have wide application, has entered the industrial field of vision. When faced with multiple and heterogeneous tasks, a single UGV executes them inefficiently, so task planning research under multi-UGV cooperation has become more urgent. In this paper, under the anti-collision cooperation mechanism for multi-UGV path planning, an improved algorithm wi…

Cited by 6 publications (3 citation statements) · References 50 publications
“…Subsequently, Ref. [81] introduces a novel update method for the Q-function, proposing an enhanced Q-learning algorithm tailored for multi-vehicle cooperative obstacle avoidance in static environments. This refined algorithm facilitates the identification of optimal paths within the same iteration, resulting in shortened final path trajectories.…”
Section: Cooperative Planning Methods of Multi-Vehicle Formation
confidence: 99%
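The statement above refers only loosely to the modified Q-function update; the paper's exact optimized-weighted-speedy rule is not reproduced on this page. For orientation, the sketch below shows a plain tabular Q-learning update next to a speedy-Q-style variant that reuses the previous iterate's Q-table, which is the family of updates the title suggests. All function and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def bellman_target(q, reward, next_state, gamma):
    """Empirical Bellman backup: r + gamma * max_a' Q(s', a')."""
    return reward + gamma * np.max(q[next_state])

def q_learning_update(q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Plain tabular Q-learning update."""
    q[s, a] += alpha * (bellman_target(q, r, s_next, gamma) - q[s, a])

def speedy_q_update(q, q_prev, s, a, r, s_next, alpha, gamma=0.95):
    """Speedy-Q-style update: mixes Bellman backups computed from the
    current and the previous Q-tables to accelerate convergence.
    NOTE: the 'optimized-weighted' weighting of the cited paper is not
    shown here; this is only the generic speedy form."""
    t_prev = bellman_target(q_prev, r, s_next, gamma)
    t_curr = bellman_target(q, r, s_next, gamma)
    q[s, a] = q[s, a] + alpha * (t_prev - q[s, a]) + (1.0 - alpha) * (t_curr - t_prev)

# Tiny usage: 4 grid states x 4 actions, one observed transition.
q, q_prev = np.zeros((4, 4)), np.zeros((4, 4))
speedy_q_update(q, q_prev, s=0, a=1, r=-1.0, s_next=2, alpha=0.5)
```

In tabular form the speedy variant simply keeps two Q-tables and swaps them after each update sweep, which is what lets it find shorter paths within the same number of iterations.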
“…Findings show that MMACO-DE reduces distance and saves time by taking fewer turns in a complex 3D environment. An optimized-weighted-speedy Q-learning (OWS Q-learning) algorithm and a collision avoidance cooperation method are suggested for multiple UGVs [28]. Table 3 summarizes the abovementioned literature in tabular form and provides a comparative analysis of the deployed platforms, applied obstacle avoidance protocols, considered obstacles and evaluation indexes, and finally the resulting performance in each study.…”
Section: Obstacle Avoidance Protocol
confidence: 99%
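The page gives no detail on the collision avoidance cooperation method itself. As a minimal, assumed illustration of what such a mechanism can look like on a shared grid, the sketch below has each UGV propose its next cell and yield to a fixed priority order when two proposals clash; all identifiers are hypothetical and not taken from the cited work.

```python
from typing import Dict, Tuple

Cell = Tuple[int, int]

def resolve_step(proposed: Dict[str, Cell], current: Dict[str, Cell]) -> Dict[str, Cell]:
    """One simple anti-collision rule on a shared grid: each UGV proposes
    its next cell; if that cell is already occupied or has been claimed
    by a higher-priority UGV this step, the UGV waits in place.
    Illustrative scheme only."""
    reserved = set(current.values())        # occupied cells block entry
    decided: Dict[str, Cell] = {}
    for ugv_id in sorted(proposed):         # fixed priority order, e.g. by ID
        target = proposed[ugv_id]
        if target in reserved and target != current[ugv_id]:
            decided[ugv_id] = current[ugv_id]   # conflict: wait this step
        else:
            decided[ugv_id] = target
        reserved.add(decided[ugv_id])           # claim the chosen cell
    return decided

# Example: both UGVs want cell (1, 1); the lower-priority one waits.
current = {"ugv_a": (0, 1), "ugv_b": (1, 0)}
proposed = {"ugv_a": (1, 1), "ugv_b": (1, 1)}
print(resolve_step(proposed, current))  # {'ugv_a': (1, 1), 'ugv_b': (1, 0)}
```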
“…In the navigation process, SLAM (Simultaneous Localization and Mapping) helps the robot determine its position in an unknown environment and build the corresponding environment map, while path planning helps the robot plan its movement route on the basis of SLAM. Path planning is the key to autonomous navigation for mobile robots: in an environment with obstacles, the robot must find a suitable collision-free path from the starting point to the target point according to certain evaluation criteria [3,4]. Mobile robots are typically used in complex environments where static and dynamic obstacles are present, and the path-planning results have a direct impact on the efficiency and safety of the robot [5].…”
Section: Introduction
confidence: 99%
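The statement above defines path planning as finding a collision-free route from a start point to a target point under some evaluation criterion. As a self-contained illustration (not the method of any cited work), the sketch below runs A* on a small occupancy grid, using path length as the evaluation criterion.

```python
from heapq import heappush, heappop

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid: 0 = free, 1 = obstacle.
    The criterion here is plain path length (Manhattan heuristic);
    real planners may also weight turns, clearance, energy, etc."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                    (nr, nc), path + [(nr, nc)]))
    return None  # no collision-free path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours around the blocked middle row
```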