2020
DOI: 10.48550/arxiv.2011.00401
Preprint

The MAGICAL Benchmark for Robust Imitation

Cited by 3 publications (5 citation statements)
References 10 publications
“…Environments and training data. We evaluate on ten tasks taken from three benchmark domains: DMC [8], Procgen [7], and MAGICAL [6]. Here we briefly explain our choice of tasks and datasets; for more detailed information (e.g.…”
Section: Experiments Setup
Mentioning confidence: 99%
“…For each MAGICAL environment, we used a fixed subset of five demonstration trajectories (initially selected at random) from the human dataset provided with the benchmark [6]. We used egocentric views with a frame stack of four and no action repeat.…”
Section: Hyperparameter
Mentioning confidence: 99%
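The setup quoted above (egocentric observations, a frame stack of four, no action repeat) can be reproduced with standard Gym-style preprocessing. Below is a minimal sketch, not the citing paper's exact pipeline: the frame-stacking wrapper is written from scratch, and the registration call and environment ID in the trailing comment are assumptions included only for illustration.

```python
# Minimal sketch of the observation preprocessing described in the quote:
# egocentric image observations, stacked 4 frames deep, no action repeat.
from collections import deque

import gym
import numpy as np


class FrameStack(gym.Wrapper):
    """Stack the last k image observations along the channel axis."""

    def __init__(self, env, k=4):
        super().__init__(env)
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        # Fill the buffer with copies of the first frame.
        for _ in range(self.k):
            self.frames.append(obs)
        return np.concatenate(list(self.frames), axis=-1)

    def step(self, action):
        # No action repeat: each agent action is applied exactly once.
        obs, reward, done, info = self.env.step(action)
        self.frames.append(obs)
        return np.concatenate(list(self.frames), axis=-1), reward, done, info


# Hypothetical usage; the registration call and environment ID below are
# assumptions, not a confirmed part of the cited experimental setup:
# import magical; magical.register_envs()
# env = FrameStack(gym.make("MoveToCorner-Demo-v0"), k=4)
```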
“…We introduce a cross-embodiment imitation learning benchmark, X-MAGICAL, which is based on the imitation learning benchmark MAGICAL [33], implemented on top of the physics engine PyMunk [34].…”
Section: X-MAGICAL Benchmark
Mentioning confidence: 99%
“…Recently, with the growing interest of the research community in IL, LfD, or ORL, many datasets have been released. They mainly focus on robotics [15,44,48,40], some with a particular focus on human data [31,32,39]. Some works include datasets for discrete-action environments like games [19,28].…”
Section: Related Work
Mentioning confidence: 99%