2020
DOI: 10.1109/access.2020.3023200
Cooperative Pathfinding Based on Memory-Efficient Multi-Agent RRT*

Abstract: In cooperative pathfinding problems, non-conflict paths that bring several agents from their start location to their destination need to be planned. This problem can be efficiently solved by the Multi-Agent RRT* (MA-RRT*) algorithm, which is still state-of-the-art in the field of coupled methods. However, the implementation of this algorithm is hindered in systems with limited memory because the number of nodes in the tree of RRT* grows indefinitely as the paths get optimized. This paper proposes an improved version…

Cited by 13 publications (6 citation statements)
References 19 publications
“…t_i is the number of time-steps required for an agent a_i to reach its goal position and remain there. Thus, there exists a minimum t_i such that s_i^(t) = g_i for each t ≥ t_i [37], [38]. Makespan indicates the maximum t_i among all the agents, which can be defined as max_{1≤i≤k} t_i [14].…”
Section: A Multi-Agent Path Finding
confidence: 99%
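The makespan definition quoted above reduces to taking the largest per-agent arrival time. A minimal sketch (the function name and the example arrival times are ours, not from the cited papers):

```python
def makespan(arrival_times):
    """Makespan of a multi-agent plan: the largest t_i, where t_i is the
    number of time-steps agent a_i needs to reach its goal and stay there."""
    return max(arrival_times)

# Three agents reaching their goals at t = 4, 7, and 5: makespan is 7.
print(makespan([4, 7, 5]))  # → 7
```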
“…Ragaglia et al 8 extended Poli-RRT* to a multiagent cooperative setting in which multiple vehicles shared the same environment and had to avoid each other as well as static obstacles. To improve the efficiency of cooperative pathfinding in dense environments, Jiang and Wu 9 improved MA-RRT* by introducing the potential-field method into the sampling process, producing the MA-RRT*PF algorithm. The advantage of this type of method is that it can search a high-dimensional space quickly without modeling the environment.…”
Section: Related Work
confidence: 99%
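The citation above describes biasing RRT* sampling with a potential field. A minimal sketch of one common way to realize such a bias, drawing samples near the goal with some probability instead of uniformly (the function name, parameters, and Gaussian scheme are illustrative assumptions, not the MA-RRT*PF implementation):

```python
import random

def biased_sample(goal, bounds, goal_bias=0.1, sigma=1.0):
    """Sample a 2-D point for tree extension.

    With probability goal_bias, draw a point near the goal (a simple
    stand-in for an attractive potential); otherwise sample uniformly
    over the rectangular workspace given by bounds = ((xmin, xmax),
    (ymin, ymax)).
    """
    if random.random() < goal_bias:
        return (random.gauss(goal[0], sigma), random.gauss(goal[1], sigma))
    (xmin, xmax), (ymin, ymax) = bounds
    return (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
```

Raising `goal_bias` concentrates samples near the goal, which speeds convergence in open space but can hurt exploration in cluttered environments.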
“…Typically, each agent makes its own decisions based only on partially observed information from an onboard sensor and may know nothing about the policies and intents of other agents. This partial observability makes sampling-based approaches inapplicable, 5-9 because it is difficult to sample satisfactory policies for all agents simultaneously within a large search space. When faced with a dynamic circumstance, the decision-making process becomes even more difficult.…”
Section: Introduction
confidence: 99%
“…SBPs are applicable to a wide range of applications. Examples include planning with arbitrary cost maps (Iehl et al, 2012), cooperative multi-agent planning (Jiang & Wu, 2020), and planning in dynamic environments (Yershova et al, 2005). On the one hand, researchers have focused on the algorithmic side of improving the graph or tree building (Elbanhawi & Simic, 2014; Klemm et al, 2015; Lai et al, 2019; Lai & Ramos, 2021b; Zhong & Su, 2012).…”
Section: Introduction
confidence: 99%