2022
DOI: 10.1109/LRA.2022.3190086

Arena-Bench: A Benchmarking Suite for Obstacle Avoidance Approaches in Highly Dynamic Environments

Abstract: The ability to autonomously navigate safely, especially within dynamic environments, is paramount for mobile robotics. In recent years, deep reinforcement learning (DRL) approaches have shown superior performance in dynamic obstacle avoidance. However, these learning-based approaches are often developed in specially designed simulation environments and are hard to test against conventional planning approaches. Furthermore, the integration and deployment of these approaches into real robotic platforms are not yet completely solved. In this …
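To make the benchmarking idea in the abstract concrete, below is a minimal sketch of the kind of evaluation loop such a suite implies: every planner is run over the same scenarios and aggregate navigation metrics are compared. The planner names, the `run_episode()` helper, and the metric set are illustrative assumptions, not the actual Arena-Bench API.

```python
# Hypothetical benchmarking loop: evaluate each planner on identical
# scenarios and aggregate common navigation metrics. Not Arena-Bench's
# real interface; run_episode() is a stand-in for a simulator rollout.
from statistics import mean

def run_episode(planner, scenario):
    """Placeholder: run one navigation episode and report its outcome."""
    # A real suite would step a simulator until the robot reaches the
    # goal, collides, or times out, then report the measured values.
    return {"success": True, "collisions": 0, "time_s": 12.3, "path_len_m": 8.7}

def benchmark(planners, scenarios):
    results = {}
    for name, planner in planners.items():
        episodes = [run_episode(planner, s) for s in scenarios]
        results[name] = {
            "success_rate": mean(e["success"] for e in episodes),
            "avg_collisions": mean(e["collisions"] for e in episodes),
            "avg_time_s": mean(e["time_s"] for e in episodes),
            "avg_path_len_m": mean(e["path_len_m"] for e in episodes),
        }
    return results
```

Running all planners on a shared scenario set is what makes learning-based and conventional approaches directly comparable, which is the gap the abstract points out.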

Cited by 26 publications (34 citation statements)
References 27 publications
“…This work extends our previous works Arena-Bench [1] and Arena-Rosnav [2]: the former provided tools to benchmark trained agents against several model-based and learning-based approaches in different highly dynamic scenarios, and the latter proposed a platform for developing and training DRL agents within a resource-efficient 2D environment. DRL-based approaches have shown remarkable results for navigation in highly dynamic environments, and a variety of research works have incorporated DRL into their systems [4], [5], [6], [7], [8], [9].…”
Section: Related Work (mentioning)
confidence: 69%
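The "resource-efficient 2D environment" mentioned in this statement can be pictured as a lightweight Gym-style navigation environment. The sketch below is purely illustrative: the observation layout, action bounds, dynamics, and reward terms are assumptions, not Arena-Rosnav's actual definitions.

```python
# Hypothetical Gym-style 2D navigation environment illustrating the idea
# of a cheap training world for DRL agents. All spaces and rewards here
# are assumed, not taken from Arena-Rosnav.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class Nav2DEnv(gym.Env):
    """Toy 2D point-robot navigation environment (illustrative only)."""

    def __init__(self):
        # Observation: robot pose (x, y, heading) plus goal position (x, y).
        self.observation_space = spaces.Box(-10.0, 10.0, shape=(5,), dtype=np.float32)
        # Action: forward velocity and turn rate.
        self.action_space = spaces.Box(np.array([0.0, -1.0], dtype=np.float32),
                                       np.array([1.0, 1.0], dtype=np.float32))
        self.dt = 0.1  # integration step in seconds

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.pose = np.zeros(3, dtype=np.float32)  # x, y, heading
        self.goal = self.np_random.uniform(-5, 5, size=2).astype(np.float32)
        return self._obs(), {}

    def step(self, action):
        v, w = float(action[0]), float(action[1])
        # Unicycle kinematics: move forward along the heading, then turn.
        self.pose[0] += v * np.cos(self.pose[2]) * self.dt
        self.pose[1] += v * np.sin(self.pose[2]) * self.dt
        self.pose[2] += w * self.dt
        dist = float(np.linalg.norm(self.goal - self.pose[:2]))
        terminated = dist < 0.3                          # goal reached
        reward = -dist + (10.0 if terminated else 0.0)   # dense shaping + bonus
        return self._obs(), reward, terminated, False, {}

    def _obs(self):
        return np.concatenate([self.pose, self.goal]).astype(np.float32)
```

Keeping the world 2D and kinematic is what makes large-scale DRL training cheap; dynamic obstacles and laser scans would extend the observation but not change the structure.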
“…In recent years, Deep Reinforcement Learning (DRL) for navigation in dynamic environments has achieved remarkable results and has been applied by a variety of researchers. However, several works have pointed out downsides and challenges when developing and working with DRL approaches [3], [1]. In particular, development remains a high hurdle to overcome, the training process is oftentimes difficult and tedious, and comparability with other approaches is not trivial.…”
Section: Introduction (mentioning)
confidence: 99%
“…DRL has emerged as an end-to-end approach with the potential to learn complex behavior in unknown environments. In the field of robot navigation, DRL-based approaches have shown promising results [11], [12], [1], [13]. Wen et al. propose using DDPG to plan the trajectory of a robot arm for obstacle avoidance based purely on DRL [14].…”
Section: Related Work (mentioning)
confidence: 99%
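For readers unfamiliar with DDPG, the algorithm the statement attributes to Wen et al., the core update is a critic regressed to a bootstrapped TD target plus an actor that ascends the critic's value. The sketch below shows that update in PyTorch; network sizes, dimensions, and hyperparameters are illustrative assumptions and not tied to any cited work.

```python
# Minimal DDPG update step: actor-critic with target networks and soft
# updates. Shapes assume batched tensors (obs: [B, obs_dim], act: [B, act_dim],
# rew/done: [B, 1]). Hyperparameters are placeholders.
import copy
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 8, 2, 0.99, 0.005

actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(obs, act, rew, next_obs, done):
    # Critic: regress Q(s, a) toward the bootstrapped TD target computed
    # with the slow-moving target networks.
    with torch.no_grad():
        next_q = critic_t(torch.cat([next_obs, actor_t(next_obs)], dim=-1))
        target = rew + gamma * (1.0 - done) * next_q
    q = critic(torch.cat([obs, act], dim=-1))
    critic_loss = nn.functional.mse_loss(q, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: maximize the critic's value of the actor's own actions.
    actor_loss = -critic(torch.cat([obs, actor(obs)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft-update target networks toward the online networks.
    for t, s in zip(list(actor_t.parameters()) + list(critic_t.parameters()),
                    list(actor.parameters()) + list(critic.parameters())):
        t.data.mul_(1 - tau).add_(tau * s.data)
```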
“…Similar works by Faust et al. [16] and Chiang et al. [2] combine DRL-based motion planning with classic approaches such as RRT and PRM for the motion planning of ground robots over long distances. More recently, works by Kästner et al. [11], Dugas et al. [17], and Guldenring et al. [18] showed the superiority of DRL approaches for fast obstacle avoidance in unknown and dynamic environments. DRL-based approaches have also been utilized in a number of research works for motion planning and collision avoidance of stationary robots.…”
Section: Related Work (mentioning)
confidence: 99%
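The hybrid scheme this statement describes, a classic global planner supplying waypoints and a learned local policy handling dynamic obstacles between them, can be sketched as below. `plan_global()`, `policy()`, and the `robot` interface are assumed placeholders, not APIs from the cited works.

```python
# Hypothetical hybrid navigation loop: a classic global planner (e.g. PRM
# or RRT) produces waypoints; a DRL policy does short-range, dynamic
# obstacle avoidance between them. All interfaces here are placeholders.
import numpy as np

WAYPOINT_TOL = 0.5  # meters; assumed waypoint-switching radius

def navigate(start, goal, plan_global, policy, robot, max_steps=5000):
    """Follow globally planned waypoints with a local DRL policy."""
    waypoints = plan_global(start, goal)           # e.g., a PRM or RRT path
    for _ in range(max_steps):
        pose = robot.get_pose()                    # current (x, y) position
        # Advance past every waypoint already within the switching radius.
        while waypoints and np.linalg.norm(pose - waypoints[0]) < WAYPOINT_TOL:
            waypoints.pop(0)
        if not waypoints:
            return True                            # final goal reached
        obs = robot.observe(subgoal=waypoints[0])  # scan + relative subgoal
        robot.apply(policy(obs))                   # policy picks velocity command
    return False                                   # timed out
```

The division of labor is the point: the global planner handles long-horizon geometry the policy was never trained for, while the policy reacts to moving obstacles faster than replanning could.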