2019
DOI: 10.1109/lra.2019.2930426

GAPLE: Generalizable Approaching Policy LEarning for Robotic Object Searching in Indoor Environment

Abstract: We study the problem of learning a generalizable action policy for an intelligent agent to actively approach an object of interest in an indoor environment solely from its visual inputs. While scene-driven or recognition-driven visual navigation has been widely studied, prior efforts suffer severely from the limited generalization capability. In this paper, we first argue the object searching task is environment dependent while the approaching ability is general. To learn a generalizable approaching policy, we…
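The abstract describes an approaching policy learned solely from visual inputs. As a rough illustration only, and not the paper's actual architecture, such a policy can be pictured as a network that maps each camera frame to a distribution over discrete navigation actions; every module name, dimension, and the action set in the sketch below are assumptions.

```python
# Illustrative sketch only: a policy that maps an RGB observation to a
# distribution over discrete navigation actions. The encoder, sizes,
# and action set are assumptions, not the GAPLE implementation.
import torch
import torch.nn as nn

class ApproachingPolicy(nn.Module):
    def __init__(self, feature_dim=512, hidden_dim=256, num_actions=4):
        super().__init__()
        # Hypothetical visual encoder; in practice any backbone that
        # produces a per-frame feature vector could be substituted.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feature_dim), nn.ReLU(),
        )
        # Policy head: feature vector -> logits over discrete actions.
        self.policy = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, rgb):
        # rgb: (batch, 3, H, W) observation from the agent's camera.
        features = self.encoder(rgb)
        logits = self.policy(features)
        return torch.distributions.Categorical(logits=logits)

# Usage: sample an action for a single 128x128 observation.
obs = torch.rand(1, 3, 128, 128)
dist = ApproachingPolicy()(obs)
action = dist.sample()  # index into an assumed {forward, left, right, stop} set
```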

Cited by 24 publications (33 citation statements)
References 28 publications
“…With the rise of deep learning techniques, some works apply more complex methods to find objects. In [17] and [18] the search problem is approached through a deep reinforcement learning model that learns action policies to reach the target object. In [17] the target object has been previously seen and the semantic segmentation and the depth information are computed in each robot observation.…”
Section: Related Work (mentioning)
confidence: 99%
“…In [17] and [18] the search problem is approached through a deep reinforcement learning model that learns action policies to reach the target object. In [17] the target object has been previously seen and the semantic segmentation and the depth information are computed in each robot observation. Druon et al [19] propose a visual navigation method based on the context information of previously detected objects to calculate the similarity to the target object.…”
Section: Related Work (mentioning)
confidence: 99%
“…With the rise of deep learning techniques, some works apply more complex methods to find objects. In the approaches of Ye et al [188,189], the search problem is addressed through a deep reinforcement learning model that learns action policies to reach the target object. In [188] the target object has been previously seen and the semantic segmentation and the depth information are computed in each robot observation.…”
Section: Related Work (mentioning)
confidence: 99%
“…In the approaches of Ye et al [188,189], the search problem is addressed through a deep reinforcement learning model that learns action policies to reach the target object. In [188] the target object has been previously seen and the semantic segmentation and the depth information are computed in each robot observation. Druon et al [27] propose a visual navigation method based on previously detected objects' context information to calculate the similarity to the target object.…”
Section: Related Work (mentioning)
confidence: 99%
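The statements above describe an approach in which semantic segmentation and depth are computed at each robot observation and fed to a deep reinforcement learning policy. As a hedged sketch of what assembling such an observation could look like (not the cited implementation), a per-pixel segmentation mask and a depth map can be stacked into one multi-channel tensor; the class count, depth range, and channel layout below are assumptions.

```python
# Illustrative sketch only: stacking a semantic segmentation mask and a
# depth map into a single observation tensor for a policy network.
# Shapes, class count, and normalization are assumptions.
import numpy as np

NUM_CLASSES = 40    # hypothetical number of semantic classes
MAX_DEPTH_M = 10.0  # hypothetical depth clipping range in meters

def build_observation(seg_labels: np.ndarray, depth_m: np.ndarray) -> np.ndarray:
    """seg_labels: (H, W) integer class ids; depth_m: (H, W) depth in meters.
    Returns a (NUM_CLASSES + 1, H, W) float32 array: one-hot semantic
    channels plus one normalized depth channel."""
    h, w = seg_labels.shape
    one_hot = np.zeros((NUM_CLASSES, h, w), dtype=np.float32)
    # Set one_hot[class_id, i, j] = 1 for every pixel (i, j).
    one_hot[seg_labels, np.arange(h)[:, None], np.arange(w)[None, :]] = 1.0
    depth = np.clip(depth_m, 0.0, MAX_DEPTH_M) / MAX_DEPTH_M
    return np.concatenate([one_hot, depth[None].astype(np.float32)], axis=0)

# Usage with random placeholder data for a 64x64 frame.
seg = np.random.randint(0, NUM_CLASSES, size=(64, 64))
depth = np.random.uniform(0.0, 12.0, size=(64, 64))
obs = build_observation(seg, depth)  # shape: (41, 64, 64)
```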
“…Latest reinforcement learning studies on these tasks mainly focus on prior knowledge about targets [27,30,8]. Other studies focus on learning representation on the environment [46] or about the targets only implicitly [44]. These methods lead to insufficient understanding of the goal, and sample-inefficiency and generalization problems arise.…”
Section: Introduction (mentioning)
confidence: 99%