2022
DOI: 10.1126/scirobotics.aax8177

Efficient multitask learning with an embodied predictive model for door opening and entry with whole-body control

Abstract: Robots need robust models to effectively perform tasks that humans do on a daily basis. These models often require substantial developmental costs to maintain because they need to be adjusted and adapted over time. Deep reinforcement learning is a powerful approach for acquiring complex real-world models because there is no need for a human to design the model manually. Furthermore, a robot can establish new motions and optimal trajectories that may not have been considered by a human. However, the cost of lea…

Cited by 47 publications (22 citation statements). References 60 publications (56 reference statements).
“…The inputs are image data $i_t^{img}$ and robot joint angle data $i_t^{joint}$ at time step $t$, and the output is the prediction of next joint angles $o_{t+1}^{joint}$. The model generates robot motions by repeating the sequence of predicting the next joint angles and applying them to the robot [7], [14], [8].…”
Section: Proposed Methods, A. Design Concept (mentioning)
confidence: 99%
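The excerpt above describes a sensorimotor prediction loop: the model maps the current camera image and joint angles to the next joint angles, and motion is generated by repeatedly feeding that prediction back to the robot. The following is a minimal sketch of such a loop, assuming a PyTorch-style model and a hypothetical `robot` interface with `observe()` and `command_joints()`; the architecture, joint dimension, and method names are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class PredictiveModel(nn.Module):
    """Maps image i_t^img and joint angles i_t^joint to predicted next joints o_{t+1}^joint."""

    def __init__(self, joint_dim: int = 7, feat_dim: int = 64):
        super().__init__()
        # Image feature extractor (architecture is illustrative).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Fuse image features with current joint angles and predict the next ones.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + joint_dim, 128), nn.ReLU(),
            nn.Linear(128, joint_dim),
        )

    def forward(self, image: torch.Tensor, joints: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(image)
        return self.head(torch.cat([feat, joints], dim=-1))


def generate_motion(model: PredictiveModel, robot, steps: int = 100) -> None:
    """Closed loop: observe, predict the next joint angles, command the robot, repeat."""
    model.eval()
    for _ in range(steps):
        image, joints = robot.observe()          # hypothetical robot API (assumption)
        with torch.no_grad():
            next_joints = model(image, joints)   # o_{t+1}^joint
        robot.command_joints(next_joints)        # apply predicted angles, then repeat
```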
“…By contrast, end-to-end learned models train the vision model jointly with the robot control model. This allows the vision models, such as the widely used vanilla fully convolutional networks (FCNs), to extract task-oriented features even from untrained objects [7], [14], [8]. However, FCNs lack spatial generalization ability [15] and cannot extract data from untrained visual areas.…”
Section: Related Work, A. Visual Perception in Robot Control (mentioning)
confidence: 99%
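The excerpt contrasts separately trained vision modules with end-to-end learning, in which the vision network is optimized jointly with the control model so that its features become task-oriented. Below is a minimal sketch of such joint training, using a plain convolutional encoder as a stand-in for the FCN and an assumed dataset of (image, joints, next_joints) tuples; dimensions and names are illustrative assumptions rather than a description of any cited paper's code.

```python
import torch
import torch.nn as nn

# Vision encoder (stand-in for the vanilla fully convolutional network) and control head.
vision = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # global pooling to a 32-dim feature
)
control = nn.Sequential(
    nn.Linear(32 + 7, 128), nn.ReLU(),           # 7-DoF arm assumed for illustration
    nn.Linear(128, 7),
)

# A single optimizer over both modules is what makes the training end to end.
optimizer = torch.optim.Adam(
    list(vision.parameters()) + list(control.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()


def train_step(image: torch.Tensor, joints: torch.Tensor,
               next_joints: torch.Tensor) -> float:
    """One gradient step; the control loss also shapes the vision features."""
    feat = vision(image)
    pred = control(torch.cat([feat, joints], dim=-1))
    loss = loss_fn(pred, next_joints)
    optimizer.zero_grad()
    loss.backward()      # gradients flow through the control head into the vision encoder
    optimizer.step()
    return loss.item()
```

Because one loss drives both modules, the visual features are shaped directly by the control objective rather than by a separate vision-only training stage, which is the mechanism behind the task-oriented features the excerpt refers to.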