2022
DOI: 10.48550/arxiv.2203.01983
Preprint

Implicit Kinematic Policies: Unifying Joint and Cartesian Action Spaces in End-to-End Robot Learning

Cited by 4 publications (6 citation statements)
References 0 publications
“…We now evaluate procedure cloning in larger scale settings on tasks including simulated robotic navigation [82] and manipulation [83,14], and learning to play MinAtar [84] (a miniature version of Atari [85]). Procedure cloning exhibits significant generalization to previously unseen maze layouts, positions of objects being manipulated, and environment configurations such as transition stochasticity and game difficulty in each of the tasks, respectively.…”
Section: Methods
confidence: 99%
“…Task description. The bimanual sweeping task [83,14] requires two 7-DoF robot arms equipped with spatula-like end-effectors to sweep a pile of particles evenly into two bowls while avoiding dropping particles between the tips of the spatulas. The scripted oracle for collecting expert trajectories uses access to privileged information including object poses and contact points, which are not accessible at test time.…”
Section: Evaluating Image-based Robot Manipulation
confidence: 99%
“…CaP exhibits a degree of cross-embodiment support [59], [60] by performing the same task differently depending on the action APIs. In the example below, we give Hints of the action APIs, and the resultant plan changes depending on whether the robot is omnidirectional or unidirectional.…”
Section: Cross Embodiment Example
confidence: 99%
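The embodiment-dependent planning described in the citation above can be sketched minimally as follows. All function and action names here are hypothetical illustrations of the idea, not APIs from the CaP paper: the same relative-motion goal maps to different primitive-action plans depending on whether the base can translate in any direction.

```python
import math

def move_to(dx: float, dy: float, omnidirectional: bool) -> list:
    """Return a primitive-action plan for reaching a relative offset (dx, dy).

    Hypothetical action APIs: an omnidirectional base can translate
    directly, while a unidirectional base must rotate first, then drive.
    """
    if omnidirectional:
        # One primitive suffices: translate straight to the offset.
        return ["translate({:.2f}, {:.2f})".format(dx, dy)]
    # Unidirectional base: face the goal, then drive forward.
    heading = math.atan2(dy, dx)
    distance = math.hypot(dx, dy)
    return ["rotate({:.2f})".format(heading),
            "drive_forward({:.2f})".format(distance)]
```

Given the same goal, an omnidirectional robot gets a one-step plan while a unidirectional robot gets a two-step rotate-then-drive plan, mirroring how a planner's output would change with the declared action API.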
“…Specifically, given a list of obstacle categories described with natural language, we can localize those obstacles at runtime to generate a binary map for collision avoidance and/or shortest path planning. A prominent use case for this is sharing a VLMap of the same environment between different robots with different embodiments (i.e., cross-embodiment problem [36], [37]), which may be useful for multi-agent coordination [38]. For example, a large mobile robot may need to navigate around a table (or other large furniture), while a drone can directly fly over it.…”
Section: Generating Open-vocabulary Obstacle Maps
confidence: 99%
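The binary-map idea in the citation above can be sketched as follows. This is a minimal illustration, not code from VLMaps: the category names and cell coordinates are made up, and the localized obstacle cells stand in for what a vision-language model would produce at runtime. Masking different categories per embodiment yields different binary maps, and a 4-connected BFS then plans a shortest collision-free path on each.

```python
from collections import deque

# Hypothetical localized obstacle cells per category (illustration data).
CATEGORIES = {"table": [(0, 2), (1, 2), (2, 2), (3, 2)]}

def binary_map(rows, cols, blocked_categories):
    """Mark every cell of each blocked category as an obstacle (1)."""
    grid = [[0] * cols for _ in range(rows)]
    for cat in blocked_categories:
        for r, c in CATEGORIES.get(cat, []):
            grid[r][c] = 1
    return grid

def shortest_path(grid, start, goal):
    """4-connected BFS; returns the cell sequence, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None

# Same environment, different embodiments: the ground robot must route
# around the table, while the drone flies over it (no obstacle cells).
ground_map = binary_map(5, 5, ["table"])
drone_map = binary_map(5, 5, [])
```

Running `shortest_path` on both maps from `(2, 0)` to `(2, 4)` gives the drone a direct route through the table's cells while the ground robot detours around them, matching the cross-embodiment use case in the quote.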