2016
DOI: 10.1109/tro.2016.2535443
Orthogonal Image Features for Visual Servoing of a 6-DOF Manipulator With Uncalibrated Stereo Cameras

Abstract: We present an uncalibrated visual servoing (VS) approach to control a 6-DOF manipulator that addresses the challenges of choosing proper image features for target objects and designing a VS controller to enhance tracking performance. The main contribution of this article is the definition of a new virtual visual space (image space). A novel stereo camera model employing virtual orthogonal cameras is used to map 6-D poses from Cartesian space to this virtual visual space. Each comp…
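The abstract's central idea — mapping Cartesian poses through virtual cameras whose optical axes are mutually orthogonal — can be illustrated with a minimal sketch. The code below is a reconstruction, not the authors' implementation: it assumes orthographic virtual cameras and illustrative axis conventions, chosen so that each Cartesian coordinate appears linearly in at least one virtual image.

```python
import numpy as np

def virtual_orthogonal_features(p):
    """Map a 3-D point to a 4-D feature vector via two virtual cameras
    with mutually orthogonal optical axes.

    Sketch only: orthographic virtual cameras and axis choices are
    assumptions, not the paper's exact stereo camera model.
    """
    x, y, z = p
    cam_a = np.array([x, y])  # virtual camera A: looks along the Z axis
    cam_b = np.array([y, z])  # virtual camera B: looks along the X axis
    return np.concatenate([cam_a, cam_b])

# Example: a target point seen by both virtual cameras
print(virtual_orthogonal_features([0.2, -0.1, 0.5]))  # [ 0.2 -0.1 -0.1  0.5]
```

Under this orthographic assumption the feature map is linear in position, which shows why orthogonal viewing axes help decouple the feature-to-pose relation; with perspective virtual cameras the same idea holds but the map becomes nonlinear in depth.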

Cited by 44 publications (22 citation statements)
References 25 publications
“…In total, we refer to the 45 papers reviewed above. These include the works with only one sensor, discussed in section 3 (Maeda et al., 2001; Kumon et al., 2003, 2005; De Santis et al., 2007; Dune et al., 2008; Suphi Erden and Tomiyama, 2010; Suphi Erden and Maric, 2011; Tsui et al., 2011; Bussy et al., 2012; Flacco et al., 2012; Youssef et al., 2012; Ficuciello et al., 2013; Schlegl et al., 2013; Agustinos et al., 2014; Baumeyer et al., 2015; Gridseth et al., 2015, 2016; Magassouba et al., 2015, 2016a,b,c; Wang et al., 2015; Bauzano et al., 2016; Cai et al., 2016; Leboutet et al., 2016; Narayanan et al., 2016; Bergner et al., 2017; Cortesao and Dominici, 2017; Dean-Leon et al., 2017) and those which integrated multiple sensors, discussed in section 4 (Huang et al., 1999; Okuno et al., 2001, 2004; Natale et al., 2002; Hornstein et al., 2006; Pomares et al., 2011; Chan et al., 2012; Cherubini and Chaumette, 2013; Cherubini et al., 2014, 2015, 2016; Navarro et al., 2014; Papageorgiou et al., 2014; Dean-Leon et al., 2016; Chatelain et al., 2017). The five criteria are: sensor(s), integration method (when multiple sensors are used), control objective, target sector, and robot platform.…”
Section: Classification of Work and Discussion
confidence: 99%
“…Humans generally use vision to teach the robot relevant configurations for collaborative tasks. For example, Cai et al. (2016) demonstrated an application where a human operator used a QR code to specify the target poses for a 6 degrees-of-freedom (DOF) robot arm. In Gridseth et al. (2016), the user provided target tasks via a tablet-like interface that sent the robot the desired reference view; here, the human specified various motions, such as point-to-point and line-to-line, that were automatically performed via visual feedback.…”
Section: Sensor-based Control
confidence: 99%
“…Visual servo control technology has attracted increasing attention in robotics due to its high efficiency and accuracy [23][24][25][26]. In this section, a visual servo controller is presented to drive the 2-DOF manipulator to track the preselected target object. The image-based visual servoing method is applied in this article due to its advantages.…”
Section: Visual Servoing-based Object Tracking
confidence: 99%
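For context on the image-based scheme the excerpt above adopts, here is a minimal sketch of the classic IBVS control law v = -λ L⁺ (s − s*) for point features. This is the textbook form (Chaumette and Hutchinson), not the specific controller of [23]–[26]; the point coordinates, depths, and gain below are illustrative assumptions.

```python
import numpy as np

def ibvs_twist(s, s_star, Z, lam=0.5):
    """Classic IBVS law: v = -lambda * pinv(L) @ (s - s_star).

    s, s_star : (N, 2) current / desired normalized image points.
    Z         : (N,) depth estimates used in the interaction matrix.
    Returns the 6-D camera twist [vx, vy, vz, wx, wy, wz].
    """
    rows = []
    for (x, y), z in zip(s, Z):
        # Standard point-feature interaction matrix rows
        rows.append([-1 / z, 0, x / z, x * y, -(1 + x * x), y])
        rows.append([0, -1 / z, y / z, 1 + y * y, -x * y, -x])
    L = np.asarray(rows)                              # (2N, 6)
    e = (np.asarray(s) - np.asarray(s_star)).ravel()  # feature error
    return -lam * np.linalg.pinv(L) @ e

# Example: drive four image points toward a shrunken desired pattern
s      = np.array([[0.1, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
s_star = s * 0.5
print(ibvs_twist(s, s_star, Z=np.ones(4)))
```

The pseudo-inverse handles the redundant 2N×6 system; in a real loop the twist would be mapped through the robot Jacobian to joint velocities and recomputed at each frame.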
“…Taryudi et al. [22] used a stereo camera for bottle-cap detection in an eye-to-hand manipulator configuration. Cai et al. [23] used the same configuration and proposed a detection method for obstacle avoidance in six-dimensional (6-D) poses using stereo cameras. Chen et al. [24] combined the geometry constraint with the epipolar constraint to achieve 3-D recovery of an optic fiber in a compact eye-to-hand manipulator environment.…”
Section: Introduction
confidence: 99%
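The stereo 3-D recovery step mentioned in the excerpt above (e.g., Chen et al.'s epipolar-constrained reconstruction) typically reduces to two-view triangulation once correspondences satisfy the epipolar constraint x₂ᵀ F x₁ = 0. The sketch below uses the generic linear (DLT) triangulation method from Hartley and Zisserman, with made-up projection matrices P1, P2 and point values for illustration; it is not the pipeline of any cited work.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : (3, 4) camera projection matrices.
    x1, x2 : (2,) normalized image coordinates in each view.
    """
    # Each image measurement contributes two linear constraints on X
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector = homogeneous 3-D point
    X = Vt[-1]
    return X[:3] / X[3]

# Two axis-aligned cameras with a 0.1 m baseline (illustrative values)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.05, 0.02, 1.0])
x1 = (P1 @ np.append(X_true, 1))[:2] / (P1 @ np.append(X_true, 1))[2]
x2 = (P2 @ np.append(X_true, 1))[:2] / (P2 @ np.append(X_true, 1))[2]
print(triangulate_dlt(P1, P2, x1, x2))   # ~ [0.05, 0.02, 1.0]
```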