2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00586

Visual Room Rearrangement

Cited by 55 publications (30 citation statements)
References 32 publications
“…To train policies, we use the AI2-iTHOR [32] simulator where agents can navigate: A_N = {move forward, turn left/right 90°, look up/down 30°}, and interact with objects: A_I = {take, put, open, close, toggle-on, toggle-off, slice}. Our action space of size |A| = 110 is the union of all navigation actions and valid object interactions following [62]. We use all 30 kitchen scenes from AI2-iTHOR, split into training (25) and testing (5) sets.…”
Section: Methods
confidence: 99%
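The excerpt above builds a discrete action space as the union of navigation actions and valid (verb, object) interactions. Below is a minimal Python sketch of that construction; the names NAV_ACTIONS, INTERACTIONS, OBJECT_TYPES, and build_action_space are illustrative assumptions, not the cited paper's code or the AI2-THOR API, and the toy object list is hypothetical.

# Minimal sketch: union of navigation actions and object interactions.
# All names and the object list here are illustrative assumptions.
from itertools import product

# Navigation actions A_N, as listed in the excerpt.
NAV_ACTIONS = [
    "move_forward",
    "turn_left_90", "turn_right_90",
    "look_up_30", "look_down_30",
]

# Interaction verbs A_I, as listed in the excerpt.
INTERACTIONS = ["take", "put", "open", "close",
                "toggle_on", "toggle_off", "slice"]

# Hypothetical object types; a real setup would enumerate the simulator's
# interactable object categories instead.
OBJECT_TYPES = ["apple", "knife", "fridge", "microwave", "lamp"]

def build_action_space(nav, verbs, objects, valid_pairs=None):
    """Return navigation actions plus all (verb, object) interactions,
    optionally filtered by a set of valid pairs."""
    actions = list(nav)
    for verb, obj in product(verbs, objects):
        # Keep only interactions that make sense for the object, if a
        # validity filter is given (e.g. you cannot 'slice' a fridge).
        if valid_pairs is None or (verb, obj) in valid_pairs:
            actions.append(f"{verb} {obj}")
    return actions

action_space = build_action_space(NAV_ACTIONS, INTERACTIONS, OBJECT_TYPES)
print(f"|A| = {len(action_space)}")  # 5 + 7*5 = 40 in this toy example

In the excerpt's setting, restricting interactions to valid combinations over the simulator's object types is what yields |A| = 110.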
“…Similarly, not many datasets provide information about object associations between two recordings. The datasets can be separated into synthetic frame-wise annotated datasets (Park et al., 2021; Weihs et al., 2021) and real-world datasets where the 3D map is annotated (Wald et al., 2019; Langer et al., 2020). Based on the task definitions from Batra et al. (Batra et al., 2020), Weihs et al. (Weihs et al., 2021) introduced a new dataset with object rearrangements in a virtual environment for studying how robots explore their environment.…”
Section: Related Work
confidence: 99%
“…This does not reflect the real world, where objects may appear, and are therefore unknown, or disappear. Today most approaches assume a given and fixed set of objects, e.g., Bore et al. (2018) and Weihs et al. (2021). To develop more general methods, the task of open-world object detection was recently defined by Joseph et al. (2021).…”
Section: Introduction
confidence: 99%
“…[52] tackle 'interactive navigation', where the robot can bump into and push objects during navigation, but does not have an arm. Some works [56][57][58] abstract away gross motor control entirely by using symbolic interaction capabilities (e.g. a 'pick up X' action) or a 'magic pointer' [9].…”
Section: Related Work
confidence: 99%