2009
DOI: 10.1142/s0219843609001796
Towards Grasp-Oriented Visual Perception for Humanoid Robots

Abstract: A distinct property of robot vision systems is that they are embodied. Visual information is extracted for the purpose of moving in and interacting with the environment. Thus, different types of perception-action cycles need to be implemented and evaluated. In this paper, we study the problem of designing a vision system for the purpose of object grasping in everyday environments. This vision system is firstly targeted at the interaction with the world through recognition and grasping of objects and secondly at…

Cited by 15 publications (5 citation statements)
References 73 publications
“…Recent work in active vision by Tsotsos & Shubina (2007) and Bohg et al (2009), the former for target search and the latter for object grasping, contrary to our solution, use an explicit representation for objects to implement active perception. On the other hand, several solutions for target applications similar to ours avoid explicit object representation by resorting to a bottom-up saliency approach such as defined by Itti et al (1998) – examples of these would be Shibata, Vijayakumar, Conradt, & Schaal (2001), Breazeal, Edsinger, Fitzpatrick, & Scassellati (2001) and Dankers, Barnes, & Zelinsky (2007).…”
Section: Overall Goals and Related Work (mentioning)
confidence: 93%
“…In Bohg et al. [150], a visual-tactile cooperative control framework was proposed, which processed these two kinds of features separately and added them into the feedback control loop to plan the operation together.…”
Section: Vision- and Touch-enabled Humanoids (mentioning)
confidence: 99%
“…In order to obtain fully autonomous robots an accurate localization of the robot in the world is much more than desirable. Furthermore, if we can obtain an accurate localization in real-time, we can use the remaining computational resources to perform other important humanoid robotic tasks such as planning (Perrin et al, 2010), 3D object modeling (Foissote et al, 2010) or visual perception (Bohg et al, 2009).…”
Section: Introduction (mentioning)
confidence: 99%