2022
DOI: 10.48550/arxiv.2206.08853
Preprint

MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge

Cited by 14 publications (32 citation statements)
References 0 publications
“…We adopt the original observation space provided by MineDoJo [17], which includes a RGB camera-view, yaw/pitch angle, GPS location, and the type of 3 × 3 blocks surrounding the agent. We discretize the original multidiscrete action space provided by MineDojo into 42 discrete actions.…”
Section: Methods
Mentioning confidence: 99%
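The discretization described in the quote above can be realized as a thin wrapper that maps each chosen discrete action index onto one of MineDojo's native multi-discrete action vectors. The sketch below is illustrative only: the 8-dimensional action layout, the movement/camera bin codes, and the example table are assumptions, not the citing paper's actual 42-action mapping.

```python
from typing import List, Sequence


class DiscreteActionWrapper:
    """Maps a single discrete index onto a full multi-discrete action vector."""

    def __init__(self, env, action_table: Sequence[Sequence[int]]):
        self.env = env
        # Each row is one multi-discrete action vector in the env's native format.
        self.action_table = [list(a) for a in action_table]

    @property
    def n_actions(self) -> int:
        return len(self.action_table)

    def reset(self):
        return self.env.reset()

    def step(self, discrete_action: int):
        # Translate the discrete index back into the underlying multi-discrete action.
        return self.env.step(self.action_table[discrete_action])


def build_action_table() -> List[List[int]]:
    """Illustrative table of movement/camera combinations plus one functional action.

    The 8-dim layout (move, strafe, jump, pitch, yaw, functional, craft arg,
    inventory arg) and the bin values below are assumptions for this sketch.
    """
    noop = [0, 0, 0, 12, 12, 0, 0, 0]
    table = []
    for move in (0, 1, 2):            # no-op / forward / backward (assumed codes)
        for yaw in (10, 12, 14):      # turn left / keep / turn right (assumed camera bins)
            action = list(noop)
            action[0] = move
            action[4] = yaw
            table.append(action)
    table.append([0, 0, 0, 12, 12, 3, 0, 0])  # an "attack"-style functional action (assumed)
    return table
```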
“…First, MineAgent [17] is an online RL algorithm that leverages pretrained state representations and dense reward functions to boost training. BC (VPT) [4], BC (CLIP) [17], and BC (I-CNN) [15] are variants of the behavior cloning algorithm that use different backbone models (indicated in the corresponding brackets) for state feature extraction. The backbones are finetuned with the BC loss (see Appendix A.3 for more details).…”
Section: Single Task Experiments
Mentioning confidence: 99%
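The behavior-cloning baselines mentioned above share a simple structure: a pretrained visual backbone extracts state features, a small head predicts the expert's discrete action, and both are finetuned with a cross-entropy (BC) loss. The sketch below uses a torchvision ResNet-18 purely as a stand-in backbone; the cited experiments use VPT, CLIP, and Impala-CNN backbones instead.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class BCPolicy(nn.Module):
    """Pretrained-backbone feature extractor plus a linear action head."""

    def __init__(self, n_actions: int):
        super().__init__()
        backbone = models.resnet18(weights=None)  # placeholder for VPT / CLIP / Impala-CNN
        backbone.fc = nn.Identity()               # expose the 512-dim features
        self.backbone = backbone
        self.head = nn.Linear(512, n_actions)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) -> action logits: (batch, n_actions)
        return self.head(self.backbone(frames))


def bc_step(policy, optimizer, frames, expert_actions):
    """One behavior-cloning update: maximize log-likelihood of the expert actions."""
    logits = policy(frames)
    loss = nn.functional.cross_entropy(logits, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```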
“…On the other hand, our simulator is more suitable for open-world exploration. Recently, MINEDOJO has been developed with thousands of diverse open-ended tasks [32]. With MINEDOJO's data, one can leverage large pre-trained video language models to learn reward functions and then guide agent learning in various tasks.…”
Section: Related Work
Mentioning confidence: 99%
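The reward-learning idea in the last quote, scoring an agent's rollout clip against a natural-language task prompt with a pretrained video-language model, can be sketched as follows. The VideoLanguageModel class below is a hypothetical placeholder standing in for a model such as MineDojo's MineCLIP; its encoders and method names are assumptions, not the real API.

```python
import torch
import torch.nn.functional as F


class VideoLanguageModel(torch.nn.Module):
    """Toy stand-in: maps video clips and text prompts into a shared embedding space."""

    def __init__(self, embed_dim: int = 512, vocab_size: int = 1000):
        super().__init__()
        self.video_proj = torch.nn.Linear(3, embed_dim)              # toy video encoder
        self.text_embed = torch.nn.Embedding(vocab_size, embed_dim)  # toy text encoder

    def encode_video(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, H, W); pool everything except channels.
        pooled = frames.mean(dim=(1, 3, 4))
        return self.video_proj(pooled)

    def encode_text(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len); mean-pool the token embeddings.
        return self.text_embed(token_ids).mean(dim=1)


def language_conditioned_reward(model, frames, token_ids) -> torch.Tensor:
    """Dense reward: cosine similarity between the rollout clip and the task prompt."""
    with torch.no_grad():
        v = F.normalize(model.encode_video(frames), dim=-1)
        t = F.normalize(model.encode_text(token_ids), dim=-1)
    return (v * t).sum(dim=-1)  # one scalar reward per clip in the batch
```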