2022
DOI: 10.1109/lra.2021.3129139
Benchmarking Structured Policies and Policy Optimization for Real-World Dexterous Object Manipulation

Cited by 14 publications (13 citation statements)
References 28 publications
“…We evaluate our policy remotely on the TriFinger system [4] provided by the organisers of the real robot challenge [7]. The cube is tracked on the system using 3 cameras, described in [28].…”
Section: Policy Inference On Remote Real Robot
confidence: 99%
“…The aim in our 6-DoF manipulation task is to get the position and orientation of the cube to a specified goal position and orientation. We define our metric for 'success' in this task as getting the position within 2 cm and the orientation within 22° of the target goal pose, as used in [3]; this is comparable to the mean results obtained in [7]. Following previous works dealing with similar tasks [3,13,20], we attempted applying a reward based on the position and orientation components of the error individually.…”
Section: Experiments 1: Training
confidence: 99%
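The success criterion quoted above (position within 2 cm and orientation within 22° of the goal pose) can be expressed as a simple pose check. The sketch below is a minimal illustration of that criterion, assuming NumPy/SciPy conventions; the function and variable names are hypothetical and are not taken from the cited papers.

```python
# Minimal sketch of the quoted success criterion (hypothetical names):
# a goal counts as reached when the cube is within 2 cm in position and
# 22 degrees in orientation of the target pose.
import numpy as np
from scipy.spatial.transform import Rotation as R

POS_TOL_M = 0.02     # 2 cm position tolerance
ORI_TOL_DEG = 22.0   # 22 degree orientation tolerance

def pose_success(cube_pos, cube_quat, goal_pos, goal_quat):
    """Return True if the cube pose is within both tolerances.

    cube_pos, goal_pos: (3,) arrays in metres.
    cube_quat, goal_quat: (4,) quaternions in (x, y, z, w) order.
    """
    pos_err = np.linalg.norm(np.asarray(cube_pos) - np.asarray(goal_pos))

    # The magnitude of the relative rotation (axis-angle norm) is the
    # orientation error between the current and goal orientations.
    rel_rot = R.from_quat(goal_quat) * R.from_quat(cube_quat).inv()
    ori_err_deg = np.degrees(np.linalg.norm(rel_rot.as_rotvec()))

    return pos_err <= POS_TOL_M and ori_err_deg <= ORI_TOL_DEG
```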
“…Deep reinforcement learning (RL) has witnessed remarkable progress over the last years, particularly in domains such as video games and other synthetic toy settings [1][2][3]. On the other hand, applying deep RL to real-world robotic setups, such as learning seemingly simple dexterous manipulation tasks in multi-object settings, is still confronted with many fundamental limitations that are the focus of many recent works [4][5][6][7][8][9][10][11]. The reinforcement learning problem in robotics setups is much more challenging [12].…”
Section: Introduction
confidence: 99%