2021
DOI: 10.48550/arxiv.2104.09771
Preprint

GLiDE: Generalizable Quadrupedal Locomotion in Diverse Environments with a Centroidal Model

Abstract: Model-free reinforcement learning (RL) for legged locomotion commonly relies on a physics simulator that can accurately predict the behaviors of every degree of freedom of the robot. In contrast, approximate reduced-order models are often sufficient for many model-based control strategies. In this work, we explore how RL can be effectively used with a centroidal model to generate robust control policies for quadrupedal locomotion. Advantages over RL with a full-order model include a simple reward structure, red…
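For readers unfamiliar with the reduced-order model the abstract refers to, the following is a minimal sketch of single-rigid-body (centroidal) dynamics of the kind such controllers integrate. The mass, inertia, and time step are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Minimal sketch of a single-rigid-body (centroidal) model for a quadruped,
# assuming point-foot contact forces f_i applied at stance-foot positions r_i.
# All constants below are illustrative placeholders, not the paper's values.
MASS = 12.0                          # kg, roughly an A1-class quadruped
INERTIA = np.diag([0.1, 0.3, 0.3])   # body inertia (kg*m^2), placeholder
GRAVITY = np.array([0.0, 0.0, -9.81])
DT = 0.01                            # integration step (s)

def centroidal_step(com_pos, com_vel, ang_vel, foot_pos, foot_forces):
    """One explicit-Euler step of the centroidal dynamics.

    com_pos, com_vel : (3,) CoM position and velocity in the world frame
    ang_vel          : (3,) angular velocity (small-angle approximation)
    foot_pos         : (n, 3) world positions of the stance feet
    foot_forces      : (n, 3) ground-reaction forces at those feet
    """
    # Linear dynamics: m * a = sum of contact forces + gravity.
    total_force = foot_forces.sum(axis=0) + MASS * GRAVITY
    com_acc = total_force / MASS
    # Angular dynamics: torque about the CoM from each stance foot's force
    # (the omega x (I omega) gyroscopic term is dropped for simplicity).
    torque = np.cross(foot_pos - com_pos, foot_forces).sum(axis=0)
    ang_acc = np.linalg.solve(INERTIA, torque)
    return (com_pos + DT * com_vel,
            com_vel + DT * com_acc,
            ang_vel + DT * ang_acc)

# Example: standing on four symmetric feet, each carrying a quarter of the
# weight, which yields zero net acceleration and zero net torque.
feet = np.array([[ 0.18,  0.13, 0.0], [ 0.18, -0.13, 0.0],
                 [-0.18,  0.13, 0.0], [-0.18, -0.13, 0.0]])
forces = np.tile([0.0, 0.0, MASS * 9.81 / 4], (4, 1))
state = centroidal_step(np.array([0.0, 0.0, 0.3]), np.zeros(3), np.zeros(3),
                        feet, forces)
```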

Cited by 13 publications (13 citation statements) | References 44 publications

“…However, these low-dimensional models depend on a human-designed reference motion, which limits their application to a few gaits that are robotic and not natural-looking. Other works combine model-based controllers with reinforcement learning [24], [25] to generate desired CoM acceleration, or use learned dynamics models [26] for planning. In this work, we adapt the model-based controller from [2] and use animal motion trajectories as the reference motion to generate diverse, agile, natural motions on an A1 quadrupedal robot.…”
Section: B. Model-based Legged Locomotion Control (mentioning)
confidence: 99%
“…In this work, we adapt the model-based controller from [2] and use animal motion trajectories as the reference motion to generate diverse, agile, natural motions on an A1 quadrupedal robot. Instead of using RL as in [24], we use trajectory optimization to improve the performance of the model-based controller in simulation, and transfer the optimized reference trajectory to the real robot.…”
Section: B. Model-based Legged Locomotion Control (mentioning)
confidence: 99%
“…Unlike our approach, RLOC still uses a WBC, which may limit the types of movements that can be selected by the perceptive foothold setting policy. GLiDE [18] learns an RL policy for the centroidal dynamics directly. This system was shown to traverse narrow beams and small stepping stones.…”
Section: B. Combining RL and Planning for Locomotion (mentioning)
confidence: 99%
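To make concrete what "an RL policy for the centroidal dynamics directly" could look like, here is a hypothetical, heavily simplified gym-style environment whose state is a planar centroidal state and whose actions are net ground-reaction forces. The reward, force limits, and termination rule are illustrative assumptions, not GLiDE's actual formulation.

```python
import numpy as np

class PlanarCentroidalEnv:
    """Hypothetical gym-style environment over a planar centroidal model.

    State : CoM height z, vertical velocity vz, forward velocity vx.
    Action: net ground-reaction force (fx, fz) from the stance legs.
    Reward: track a target forward velocity while holding nominal height.
    All constants are illustrative, not taken from the GLiDE paper.
    """
    MASS, G, DT = 12.0, 9.81, 0.01
    TARGET_VX, NOMINAL_Z = 0.5, 0.3

    def reset(self):
        self.state = np.array([self.NOMINAL_Z, 0.0, 0.0])  # z, vz, vx
        return self.state.copy()

    def step(self, action):
        # Crude force limits; real controllers enforce friction cones instead.
        fx, fz = np.clip(action, [-60.0, 0.0], [60.0, 240.0])
        z, vz, vx = self.state
        vz += self.DT * (fz / self.MASS - self.G)
        vx += self.DT * (fx / self.MASS)
        z += self.DT * vz
        self.state = np.array([z, vz, vx])
        # Simple reward: velocity tracking plus height regularization.
        reward = -abs(vx - self.TARGET_VX) - abs(z - self.NOMINAL_Z)
        done = z < 0.1  # episode ends if the body collapses
        return self.state.copy(), reward, done, {}
```

A policy network would then map this three-dimensional state to the two force commands; the paper's full formulation also handles orientation, gait phase, and foot placement, which this sketch omits.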
“…Alternatively, researchers [4], [19] have investigated learning policies directly from real-world experience, which can intrinsically overcome sim-to-real gaps. Sample efficiency is a critical challenge for deep RL approaches, which can be improved by leveraging model-based control strategies [17], [20], [21]. Recently, Lee et al. [22] employed a teacher-student framework for training their agent.…”
Section: Related Work (mentioning)
confidence: 99%