2021
DOI: 10.1109/lra.2020.3048662

Embodied Visual Navigation With Automatic Curriculum Learning in Real Environments

Abstract: We present NavACL, a method of automatic curriculum learning tailored to the navigation task. NavACL is simple to train and efficiently selects relevant tasks using geometric features. In our experiments, deep reinforcement learning agents trained using NavACL in collision-free environments significantly outperform state-of-the-art agents trained with uniform sampling, the current standard. Furthermore, our agents are able to navigate through unknown cluttered indoor environments to semantically-specified targ…
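
The abstract's core idea, automatic curriculum learning that scores candidate navigation tasks by geometric features, can be illustrated with a minimal sketch. Everything below (the `NavTask` feature set, the heuristic `predicted_success` stand-in, and the 0.3–0.7 "frontier" band) is a hypothetical illustration of the general technique, not the paper's actual parameterization; NavACL learns its success predictor from the agent's own rollouts rather than using a fixed heuristic.

```python
import random
from dataclasses import dataclass

@dataclass
class NavTask:
    """A candidate navigation task, described by simple geometric
    features (illustrative stand-ins, not the paper's exact set)."""
    geodesic_distance: float  # shortest-path length from start to goal
    path_complexity: float    # e.g. clutter or turn count along the path

def predicted_success(task: NavTask, agent_skill: float) -> float:
    """Stand-in success predictor: longer, more complex tasks succeed
    less often at a given skill level. This heuristic only exists to
    make the sketch self-contained."""
    difficulty = task.geodesic_distance * (1.0 + task.path_complexity)
    return max(0.0, min(1.0, agent_skill / max(difficulty, 1e-6)))

def select_task(candidates: list[NavTask], agent_skill: float,
                low: float = 0.3, high: float = 0.7) -> NavTask:
    """Curriculum step: prefer 'frontier' tasks whose predicted success
    sits in an intermediate band (neither trivial nor hopeless), and
    fall back to uniform sampling when no such task exists."""
    frontier = [t for t in candidates
                if low <= predicted_success(t, agent_skill) <= high]
    return random.choice(frontier or candidates)

# As agent_skill grows, selection drifts toward harder tasks.
tasks = [NavTask(d, c) for d in (1.0, 5.0, 10.0) for c in (0.0, 1.0)]
print(select_task(tasks, agent_skill=3.0))
```
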


Cited by 26 publications (11 citation statements)
References 33 publications

“…Based on the success of RL algorithms for solving challenging control tasks [13], [80], [213], [214], Zhang et al. employ successor representation in learning to achieve quick adaptation. Morad et al. [215] present an indoor object-driven navigation method named NavACL that uses automatic curriculum learning and generalizes easily to new environments and targets. Kahn et al. [216] adopt multitask learning and off-policy RL to learn directly from real-world events.…”
Section: Methods for Real-World Navigation
Citation type: mentioning
confidence: 99%
“…• Step 1 - Define State Type: When assessing an RL task, it is essential to understand the state that can be obtained from the surrounding environment. For instance, some navigation tasks simplify the environment's states using grid-cell representations [68,73,93], where the agent has a limited and predetermined set of states, whereas in other tasks the environment can have unlimited states [34,38,40]. Therefore, this step involves a decision between limited vs. unlimited states.…”
Section: Problem Formulation and Algorithm Selection
Citation type: mentioning
confidence: 99%
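
The limited-vs-unlimited distinction drawn in the excerpt above can be made concrete with a short sketch. The grid dimensions, action count, and observation shape below are arbitrary illustrative choices, not values from any of the cited papers.

```python
import numpy as np

# Limited, predetermined states: a grid-cell world where the state is
# just the agent's cell index, so a tabular Q-function is feasible.
GRID_ROWS, GRID_COLS, N_ACTIONS = 10, 10, 4
q_table = np.zeros((GRID_ROWS * GRID_COLS, N_ACTIONS))

def cell_state(row: int, col: int) -> int:
    """Enumerate cells 0..99; every state the agent can occupy is
    known in advance."""
    return row * GRID_COLS + col

# Unlimited states: a raw RGB observation, as in visual navigation.
# States cannot be enumerated, so the value function must be
# approximated (e.g. by a convolutional network) rather than tabulated.
rgb_observation = np.zeros((64, 64, 3), dtype=np.uint8)
```
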
“…Bajcsy et al. [38] make a convincing argument for why any research on robot navigation necessarily has to include active perception; to quote their conclusion: "An agent is an active perceiver if it knows why it wishes to sense, and then chooses what to perceive, and determines how, when and where to achieve that perception". An active solution for autonomous automotive navigation has been proposed by Kendall et al. [39], whereas an active approach for robot navigation was presented by Morad et al. [40]. The addition of a social aspect is another dimension that is very hard to learn in a passive way: it is challenging to definitively decide on a "correct" motion in relation to humans, since they react differently to the presence of the robot and have their own specific behaviours as well [41], [4], [42].…”
Section: Active vs. Passive Vision
Citation type: mentioning
confidence: 99%