2018
DOI: 10.1109/lra.2018.2792536
Per-Contact Iteration Method for Solving Contact Dynamics

Cited by 177 publications (151 citation statements: 1 supporting, 150 mentioning, 0 contrasting). References 17 publications.
“…We expect computation time to grow quadratically with the number of contact points since the Delassus matrix (G) size depends quadratically on the contact points. This conjecture is confirmed by fitting a quadratic model to the recorded times and reproduces the result of state-of-the-art simulators [34]. Our method is hence well suited for tasks that require many contact interactions.…”
Section: B. Scalability With Respect to the Number of Contact Points (supporting, confidence: 72%)
“…One of the biggest challenges with walking robots is the dynamics at intermittent contacts. To this end, we utilize the rigid body contact solver presented in our previous work [41]. This contact solver employs a hard contact model that fully respects the Coulomb friction cone constraint.…”
Section: Modeling Rigid-Body Dynamics (mentioning, confidence: 99%)
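A minimal sketch of the constraint the quote refers to, not the paper's per-contact iteration itself: a hard contact model must keep each contact impulse lambda = (lambda_t, lambda_n) inside the Coulomb friction cone ||lambda_t|| <= mu * lambda_n. The closed-form second-order-cone projection below is the standard way such a constraint is enforced exactly.

import numpy as np

def project_to_friction_cone(lam, mu):
    """Project impulse lam = (tangent_x, tangent_y, normal) onto the cone."""
    lam_t, lam_n = lam[:2], lam[2]
    t = np.linalg.norm(lam_t)
    if t <= mu * lam_n:            # already inside the cone
        return lam
    if mu * t <= -lam_n:           # inside the polar cone: project to apex
        return np.zeros(3)
    # Otherwise project onto the cone surface (least-squares solution).
    n_new = (mu * t + lam_n) / (mu * mu + 1.0)
    return np.array([*(mu * n_new / t * lam_t), n_new])

# Example: a tangential impulse too large for the normal force gets clipped
# back onto the cone surface.
print(project_to_friction_cone(np.array([3.0, 0.0, 1.0]), mu=0.8))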
“…We used a fast custom implementation of the algorithm [55]. This efficient implementation and fast rigid-body simulation [41] allowed us to generate and process about a quarter of a billion state transitions in roughly four hours. A learning session terminates if the average performance of the policy does not improve by more than a task-specific threshold within 300 TRPO iterations.…”
Section: Reinforcement Learning (mentioning, confidence: 99%)
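One reading of the termination rule described above, as a sketch: stop the learning session when the best average return has not improved by more than a task-specific threshold over the last 300 TRPO iterations. The window and threshold values, and the interpretation of "does not improve", are assumptions here.

def should_terminate(avg_returns, window=300, threshold=0.01):
    """avg_returns: per-iteration average returns, oldest first."""
    if len(avg_returns) <= window:
        return False
    best_before = max(avg_returns[:-window])
    best_recent = max(avg_returns[-window:])
    return best_recent - best_before <= threshold

# Usage: a steadily improving returns stream keeps training going.
returns = [0.1 * i for i in range(400)]
print(should_terminate(returns))  # False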
“…We realized the MDP environment for the GP using our own implementation of CROC in C++, while for the MDP environment of the GC we used the RaiSim [20] multi-body physics engine. All RL algorithms were implemented using the TensorFlow C/C++ API.…”
Section: A. Experimental Setup (mentioning, confidence: 99%)