2021 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra48506.2021.9560837
Dynamics Randomization Revisited: A Case Study for Quadrupedal Locomotion

Abstract: Understanding the gap between simulation and reality is critical for reinforcement learning with legged robots, which are largely trained in simulation. However, recent work has resulted in sometimes conflicting conclusions with regard to which factors are important for success, including the role of dynamics randomization. In this paper, we aim to provide clarity and understanding on the role of dynamics randomization in learning robust locomotion policies for the Laikago quadruped robot. Surprisingly, in con…


Cited by 48 publications (27 citation statements). References 36 publications (40 reference statements).
“…Therefore, the resulting policies are not robust to changes in the dynamics and fail to solve the task in the real world. In contrast, sim2real methods that extend RL with domain randomization [1]–[4] or adversarial disturbances [5]–[8] have shown successful transfer to the physical world [9].…”
Section: I (mentioning)
confidence: 99%
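The passage above mentions adversarial disturbances as one route to sim2real robustness. The sketch below is a minimal, hypothetical illustration of that idea, not code from the cited papers: a wrapper that occasionally applies a random external push to the robot base during training. The `env` interface and its `apply_external_force` method are assumptions made for the example.

```python
import numpy as np

class RandomPushWrapper:
    """Illustrative sketch (hypothetical interface): perturb the robot base
    with random external forces during training, a simple stand-in for the
    adversarial-disturbance methods referenced in the citing text."""

    def __init__(self, env, push_prob=0.05, max_force=50.0, seed=0):
        self.env = env
        self.push_prob = push_prob   # chance of a push at each control step
        self.max_force = max_force   # maximum force magnitude (N)
        self.rng = np.random.default_rng(seed)

    def step(self, action):
        if self.rng.random() < self.push_prob:
            # Random horizontal push; an adversarial variant would instead
            # choose the force that most degrades the current policy.
            fx, fy = self.rng.uniform(-self.max_force, self.max_force, size=2)
            self.env.apply_external_force(np.array([fx, fy, 0.0]))
        return self.env.step(action)

    def reset(self):
        return self.env.reset()
```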
“…In robotics, domain randomization is the most widely used approach for achieving successful sim2real transfer. For example, domain randomization has been applied to in-hand manipulation [46], ball-in-a-cup [2], locomotion [9], and manipulation [3,4].…”
Section: Related Work (mentioning)
confidence: 99%
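Since several citing passages, like this one, refer to dynamics (domain) randomization, the following sketch shows the basic recipe: resample simulator parameters such as mass scale, friction, and actuation latency at every episode reset so the policy is trained over a distribution of dynamics rather than a single nominal model. The wrapped `env`, its `set_dynamics` setter, and the parameter ranges are illustrative assumptions, not an API or settings from the paper.

```python
import numpy as np

class DynamicsRandomizationWrapper:
    """Illustrative sketch of dynamics randomization (hypothetical interface):
    sample new simulator parameters at each episode reset."""

    # Assumed parameter ranges for the sake of the example.
    PARAM_RANGES = {
        "mass_scale": (0.8, 1.2),   # multiplicative scaling of link masses
        "friction": (0.5, 1.25),    # ground friction coefficient
        "latency_s": (0.0, 0.04),   # actuation latency in seconds
    }

    def __init__(self, env, seed=0):
        self.env = env
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Draw one value per parameter and push it into the simulator
        # before the episode starts.
        sampled = {name: self.rng.uniform(lo, hi)
                   for name, (lo, hi) in self.PARAM_RANGES.items()}
        self.env.set_dynamics(**sampled)
        return self.env.reset()

    def step(self, action):
        return self.env.step(action)
```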
“…However, these methods require in-depth knowledge about the environment and substantial manual effort for parameter tuning. As an alternative, reinforcement learning provides an autonomous paradigm for learning legged locomotion skills through self-exploration in complex environments [19], [21], [27], [31], [38], [45], [47], [55], [62]. Despite the successful application of RL on legged robots, most RL approaches depend only on proprioceptive input.…”
Section: Related Work (mentioning)
confidence: 99%
“…However, given only proprioceptive information, a blind RL controller addresses challenging scenarios by training with large-scale randomized environment parameters [31], [62]. While this technique delivers promising results for maneuvering on uneven ground of unknown material, it is insufficient for more complicated tasks such as avoiding obstacles that are hard to step over or estimating accurate foot placement positions for safety.…”
Section: Introduction (mentioning)
confidence: 99%
“…It expands the reach of robots and enables them to solve a wide range of tasks, from daily-life delivery to planetary exploration on challenging, uneven terrain [16,2]. Recently, beyond the success of deep reinforcement learning (RL) in navigation [56,27,86,42] and robotic manipulation [49,48,78,40], we have also witnessed tremendous improvement in locomotion skills for quadruped robots, allowing them to walk on uneven terrain [85,84] and even generalize to real-world conditions with mud, snow, and running water [46]. While these results are encouraging, most RL approaches focus on learning a robust controller for blind quadrupedal locomotion, using only proprioceptive measurements as inputs.…”
Section: Introduction (mentioning)
confidence: 99%