2022 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra46639.2022.9811967

OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation

Abstract: Learning performant robot manipulation policies can be challenging due to high-dimensional continuous actions and complex physics-based dynamics. This can be alleviated through an intelligent choice of action space. Operational Space Control (OSC) has been used as an effective task-space controller for manipulation. Nonetheless, its strength depends on the underlying modeling fidelity, and it is prone to failure when there are modeling errors. In this work, we propose OSC for Adaptation and Robustness (OSCAR), a data…
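The analytic OSC that OSCAR builds on maps a task-space PD target into joint torques through the task-space inertia matrix. A minimal sketch of that classic control law (assuming a fully actuated, gravity-compensated arm and invertible matrices; the function name and gains are illustrative, not taken from the paper):

```python
import numpy as np

def osc_torques(jacobian, mass_matrix, x_err, x_dot, kp=150.0, kd=None):
    """Classic analytic OSC: map a task-space PD target to joint torques.

    Assumes gravity is compensated elsewhere and all matrices are
    invertible; names and gains are illustrative, not from the OSCAR paper.
    """
    if kd is None:
        kd = 2.0 * np.sqrt(kp)  # critically damped by default
    m_inv = np.linalg.inv(mass_matrix)
    # Task-space inertia: Lambda = (J M^-1 J^T)^-1
    lam = np.linalg.inv(jacobian @ m_inv @ jacobian.T)
    # Desired task-space acceleration from a PD law on the pose error
    x_acc_des = kp * x_err - kd * x_dot
    # Task-space force, then joint torques via the Jacobian transpose
    return jacobian.T @ (lam @ x_acc_des)
```

Note how the law's accuracy hinges on the mass matrix and Jacobian being correct — exactly the modeling-fidelity dependence the abstract identifies, which OSCAR addresses by learning the dynamics from data.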

Cited by 4 publications (5 citation statements)
References 41 publications
“…State representation: [24], [53], [86], [87], [97], [22], [55], [138]
Reward design: [38], [119], [18], [93], [135], [23], [31], [33], [54]
Abstract learning: [27], [106], [107], [3], [16], [82], [134], [136]
Offline RL: [26], [1], [20], [39], [63], [116], [133], [140]
Parallel learning: [48], [114], [11], [32], [44], [58], [79], [80], [88], [113]
Learning from demonstration: [7], [35], [19]…”
Section: Guided RL Methods (mentioning, confidence: 99%)
“…For instance, Martin-Martin et al [82] introduce variable impedance control in the end-effector space to simplify exploration and improve robustness to disturbances. Wong et al [136] introduce Operational Space Control for Adaptation and Robustness, a data-driven version of operational space control [59] that is adaptive to changes in the dynamics of a manipulation setting. Bogdanovic et al [16] propose a policy learning the impedance and desired position in the joint space and compare this approach to torque control and a fixed gain proportional-derivative controller.…”
Section: Abstract Learning (mentioning, confidence: 99%)
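The variable impedance approach cited above (a policy choosing stiffness in end-effector space at each step) can be sketched as follows; this is a minimal spring-damper version under the assumption of a gravity-compensated arm, with illustrative names only:

```python
import numpy as np

def vic_torques(jacobian, x_err, x_dot, kp_vec, damping_ratio=1.0):
    """Variable impedance control in end-effector space.

    The policy outputs per-dimension stiffness kp_vec at every step;
    damping follows from a fixed damping ratio. A sketch assuming a
    gravity-compensated arm; names are illustrative, not from [82].
    """
    kd_vec = 2.0 * damping_ratio * np.sqrt(kp_vec)
    f = kp_vec * x_err - kd_vec * x_dot   # task-space spring-damper force
    return jacobian.T @ f                 # joint torques
```

Letting the policy modulate kp_vec (rather than output raw torques) keeps exploration in a low-dimensional, physically meaningful space, which is the robustness argument made in the quoted passage.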
“…While model-based learning has been shown to work well in some complex dynamic environments [26], model-free methods remain a popular choice in the dynamic locomotion community [16], [17]. Others have turned to embedded and more descriptive action spaces [27], [28], [29], [30] and reduced-order models [23] to enable more robust and sample-efficient learning. However, these efforts have mainly ignored the impact of observation space compression on model-free learning.…”
Section: Related Work (mentioning, confidence: 99%)