2018 IEEE International Conference on Robotics and Automation (ICRA) 2018
DOI: 10.1109/icra.2018.8461196
Deep Object-Centric Representations for Generalizable Robot Learning

Abstract: Robotic manipulation in complex open-world scenarios requires both reliable physical manipulation skills and effective and generalizable perception. In this paper, we propose a method where general purpose pretrained visual models serve as an object-centric prior for the perception system of a learned policy. We devise an object-level attentional mechanism that can be used to determine relevant objects from a few trajectories or demonstrations, and then immediately incorporate those objects into a learned poli…

Cited by 64 publications (45 citation statements)
References 26 publications
“…As a consequence, the robot can become stuck performing the same trajectory over and over again when the performed movement does not significantly change the dirt configuration: this typically happens when the table is almost clean, as discussed in the previous section. Implementing a stopping criterion based on the detection of a clean table could be a straightforward solution to this problem. Moreover, a direction for further research that could alleviate this issue is to use a deep reinforcement learning approach [8], where the robot can learn from trial and error how to clean the table, based on a set of image features provided by a deep neural network.…”
Section: Discussion
confidence: 99%
“…The EM training procedure finds, in the E-step, the responsibilities

γ_{n,i} = π_i p(ξ_n | i) / Σ_{k=1}^{K} π_k p(ξ_n | k)    (9)

and uses these values to update the estimates of the parameters Z^μ_{i,j}, Z^Σ_{i,j} and π_i in the M-step (for more details please refer to [3]). After learning, given a new set of frames of reference X_j = {A_j, b_j}, provided by the neural network from the test image, a trajectory T_n is generated in the task space by conditioning the distribution p(ξ|·) on the time variable t_n, using (5), (7) and (8).…”
Section: Task Parameterized Gaussian Mixture Model and Gaussian Mixtu…
confidence: 99%
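The E-step responsibility in equation (9) above is the standard Gaussian-mixture posterior over components. A minimal NumPy sketch of that computation is shown below; the function and variable names are illustrative and not taken from the cited paper, and a full EM implementation would alternate this with the M-step updates the quote mentions.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Multivariate normal density p(x | mu, sigma), evaluated row-wise on x (N x D)."""
    d = x - mu
    inv = np.linalg.inv(sigma)
    norm = np.sqrt((2.0 * np.pi) ** len(mu) * np.linalg.det(sigma))
    return np.exp(-0.5 * np.sum((d @ inv) * d, axis=1)) / norm

def e_step_responsibilities(xi, pis, mus, sigmas):
    """E-step: gamma[n, i] = pi_i * p(xi_n | i) / sum_k pi_k * p(xi_n | k),
    matching equation (9); xi is an (N x D) array of samples."""
    weighted = np.stack(
        [pis[i] * gaussian_pdf(xi, mus[i], sigmas[i]) for i in range(len(pis))],
        axis=1,
    )
    return weighted / weighted.sum(axis=1, keepdims=True)
```

Each row of the returned matrix sums to one, so a sample located at one component's mean receives nearly all of its responsibility from that component.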
“…It might in theory be possible to extend those algorithms to model combined sensorimotor sequences. However, existing machine learning methods for robot sensorimotor learning (Pinto et al., 2016; Devin et al., 2017) are significantly different from machine learning techniques for temporal sequences (Waibel, 1989; Hochreiter and Schmidhuber, 1997; Fine et al., 1998). A detailed comparison of sequence learning techniques to our model can be found in (Cui et al., 2016; Hawkins and Ahmad, 2016).…”
Section: Models of Sequence Memory
confidence: 99%