2011 IEEE International Conference on Robotics and Automation
DOI: 10.1109/icra.2011.5980495
Search in the real world: Active visual object search based on spatial relations

Abstract: Objects are integral to a robot's understanding of space. Various tasks such as semantic mapping, pick-and-carry missions or manipulation involve interaction with objects. Previous work in the field largely builds on the assumption that the object in question starts out within the ready sensory reach of the robot. In this work we aim to relax this assumption by providing the means to perform robust and large-scale active visual object search. Presenting spatial relations that describe topological relat…

Cited by 82 publications (69 citation statements)
References 15 publications
“…Work done by Aydemir, Sjöö and others in the CogX project ([7], [13], [8]) introduced the sampling-based approach using object location probability distributions. This provides an effective and flexible approach to active visual search that is not restricted by the complexity of optimal object search in the general case [3].…”
Section: Related Work
Mentioning confidence: 99%
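To illustrate the sampling-based idea described in this excerpt, here is a minimal Python sketch. It assumes a grid-based object-location probability map and a circular field of view; the function names (`sample_view_candidates`, `next_best_view`) and all parameters are hypothetical illustrations, not taken from the cited work.

```python
# Hypothetical sketch of sampling-based active visual search over an
# object-location probability grid (names and parameters are illustrative).
import numpy as np

def sample_view_candidates(prob_grid, free_mask, n_samples=200, rng=None):
    """Sample candidate robot positions uniformly from free space."""
    rng = rng or np.random.default_rng(0)
    free_cells = np.argwhere(free_mask)
    idx = rng.choice(len(free_cells), size=min(n_samples, len(free_cells)), replace=False)
    return free_cells[idx]

def expected_mass_in_view(prob_grid, pose, sensing_radius=10):
    """Sum the object-location probability mass inside a circular field of view."""
    rows, cols = np.indices(prob_grid.shape)
    in_view = (rows - pose[0]) ** 2 + (cols - pose[1]) ** 2 <= sensing_radius ** 2
    return prob_grid[in_view].sum()

def next_best_view(prob_grid, free_mask):
    """Greedily pick the sampled view that covers the most probability mass."""
    candidates = sample_view_candidates(prob_grid, free_mask)
    scores = [expected_mass_in_view(prob_grid, c) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Example: near-uniform prior over a 50x50 grid with a peak around (10, 40).
grid = np.full((50, 50), 1.0)
grid[8:13, 38:43] += 20.0
grid /= grid.sum()
free = np.ones_like(grid, dtype=bool)
print("next view:", next_best_view(grid, free))
```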
“…We answer these questions on the basis of QSRs. Starting from the approach of Aydemir et al. [7], we assume that the robot has the following information at the beginning of the search: a 2D map and a 3D map. To this we add the assumption that the robot also knows the poses of a set of known landmark objects, and a set of QSRs between them and other objects.…”
Section: Introduction
Mentioning confidence: 99%
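As a rough illustration of the assumptions listed in this excerpt, the following hypothetical Python sketch pairs known landmark poses with qualitative spatial relations and turns each relation into a coarse 2D candidate region for the target object. The data classes and radii are assumptions for illustration, not the authors' representation.

```python
# Illustrative sketch: landmark poses plus qualitative spatial relations
# ("on", "in", "near") used to seed candidate search regions.
from dataclasses import dataclass

@dataclass
class Landmark:
    name: str
    x: float
    y: float
    z: float

@dataclass
class QSR:
    relation: str   # e.g. "on", "in", "near"
    target: str     # object to find
    landmark: str   # known landmark object

def candidate_regions(qsrs, landmarks, near_radius=1.5):
    """Map each QSR to a rough 2D disc around the referenced landmark."""
    lm = {l.name: l for l in landmarks}
    regions = []
    for r in qsrs:
        if r.landmark in lm:
            l = lm[r.landmark]
            radius = 0.5 if r.relation in ("on", "in") else near_radius
            regions.append((r.target, (l.x, l.y), radius))
    return regions

landmarks = [Landmark("table_1", 2.0, 3.5, 0.75)]
qsrs = [QSR("on", "book", "table_1")]
print(candidate_regions(qsrs, landmarks))
```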
“…In this work, more than 50,000 labeled images provided by LabelMe [13] are used to calculate the probabilities in (1). A robot can refine these general relations by updating the probabilities based on observed instances in its own environment and using (1). This way the general set of object-object relations changes according to the specifics of the robot's environment.…”
Section: Learning Object Relations
Mentioning confidence: 99%
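The excerpt does not reproduce equation (1), so the following Python sketch only shows one plausible reading: relation probabilities maintained as smoothed relative frequencies over counted examples, first from a large labeled image set and later refined with the robot's own observations. The class, the smoothing choice, and the numbers are assumptions, not the cited formulation.

```python
# Plausible count-based sketch of learning object-object relation
# probabilities and refining them online (assumed, not from the paper).
from collections import Counter

class RelationModel:
    def __init__(self):
        self.counts = Counter()   # (target, relation, landmark) -> count
        self.totals = Counter()   # (target, landmark) -> count

    def add_observation(self, target, relation, landmark):
        self.counts[(target, relation, landmark)] += 1
        self.totals[(target, landmark)] += 1

    def probability(self, target, relation, landmark, alpha=1.0, n_relations=3):
        """Laplace-smoothed relative frequency of the relation for this pair."""
        c = self.counts[(target, relation, landmark)]
        n = self.totals[(target, landmark)]
        return (c + alpha) / (n + alpha * n_relations)

model = RelationModel()
# Counts that might come from a large labeled image set...
for _ in range(40):
    model.add_observation("cup", "on", "table")
for _ in range(10):
    model.add_observation("cup", "in", "cupboard")
# ...later refined by the robot's own sightings in its environment.
model.add_observation("cup", "on", "desk")
print(model.probability("cup", "on", "table"))
```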
“…In [1] the focus is on organizing an efficient search given information about spatial relations between objects in an environment, e.g., 'the book is on the table in room 1'. A decision-theoretic strategy selection method is adopted for finding the object using the relations.…”
Section: Introduction
Mentioning confidence: 99%
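A hedged sketch of what decision-theoretic strategy selection could look like: each candidate search strategy carries an estimated success probability and cost, and the one with the highest expected utility is chosen. The utility form and the numbers below are illustrative assumptions, not the model used in [1].

```python
# Illustrative decision-theoretic strategy selection: trade off the
# probability of finding the object against the cost of executing the search.
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str
    p_success: float   # estimated probability of finding the object
    cost: float        # estimated execution cost (e.g. travel time in seconds)

def select_strategy(strategies, reward=100.0):
    """Maximise expected utility = reward * P(success) - cost."""
    return max(strategies, key=lambda s: reward * s.p_success - s.cost)

strategies = [
    Strategy("direct search of room 1", 0.2, 30.0),
    Strategy("search the table in room 1 first ('book on table')", 0.6, 45.0),
    Strategy("exhaustive sweep of all rooms", 0.9, 300.0),
]
print(select_strategy(strategies).name)
```

With these illustrative numbers the relation-guided strategy wins, which mirrors the intuition in the excerpt: spatial relations concentrate the search where the object is likely to be.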
“…It allows communication in a human-friendly way. Further, semantic place information has the potential to facilitate other functions such as mapping [1,2], behavior-based navigation [3], task planning [4] and active object search and rescue [5,6] in an efficient way. Therefore, research on place classification has been an important step in the quest for intelligent human-robot interactions.…”
Section: Introduction
Mentioning confidence: 99%