2014 IEEE-RAS International Conference on Humanoid Robots 2014
DOI: 10.1109/humanoids.2014.7041491
3D stereo estimation and fully automated learning of eye-hand coordination in humanoid robots

Cited by 32 publications (27 citation statements); references 25 publications.
“…Open-loop approach: before entering the visual servoing control loop, we use the iCub stereo vision to get a rough 3D localization of our target. In particular, we employ a Structure From Motion algorithm [30] to obtain this 3D point, and then we move the right hand using open-loop control. Observe goal and end-effector: in order to carry out visual tracking and visual servoing, the robot has to observe both the target object and its end-effector.…”
Section: A. Experimental Setup
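The rough 3D localization this excerpt describes — estimating a target point from a rectified stereo pair before any closed-loop control — can be sketched as pinhole triangulation. The focal length, principal point, and baseline used below are illustrative values, not the iCub's actual calibration, and the function names are hypothetical:

```python
def triangulate_depth(xl, xr, focal_px, baseline_m):
    """Depth from horizontal disparity of a rectified stereo pair:
    Z = f * B / d (parallel-camera pinhole model)."""
    d = xl - xr  # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / d

def triangulate_point(xl, yl, xr, cx, cy, focal_px, baseline_m):
    """Back-project the left-image pixel (xl, yl) to a 3D point
    (metres) in the left-camera frame."""
    z = triangulate_depth(xl, xr, focal_px, baseline_m)
    x = (xl - cx) * z / focal_px
    y = (yl - cy) * z / focal_px
    return (x, y, z)

# Illustrative numbers: 400 px focal length, 6.8 cm baseline,
# matched feature at (120, 90) left / 100 right -> 20 px disparity.
p = triangulate_point(120, 90, 100, 160, 120, 400, 0.068)
```

Such a point is only a rough estimate (calibration error, matching noise), which is why the citing work follows it with visual servoing rather than trusting the open-loop reach alone.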
“…Although it seems that no markers are used, no description is given of how the end-effector pose is measured. In Fanello et al. (2014), eye-hand calibration is realized by performing several ellipsoidal arm movements with a predefined hand posture, tracking the tip of the index finger in the camera images. Optimization techniques are employed to learn the transformation between the fingertip position obtained from stereo vision and the one computed from the forward kinematics.…”
Section: Related Work
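The optimization step described here — aligning stereo-vision fingertip measurements with forward-kinematics predictions — can be illustrated with a deliberately simplified least-squares fit. The sketch below estimates only a constant translation offset (the cited work learns a full transformation, which would also require a rotation fit, e.g. via the Kabsch algorithm); all names are hypothetical:

```python
def fit_translation(vision_pts, kin_pts):
    """Least-squares translation mapping stereo-vision fingertip
    positions onto forward-kinematics predictions. For a pure
    translation model, the optimum is the mean residual."""
    n = len(vision_pts)
    return tuple(
        sum(k[i] - v[i] for v, k in zip(vision_pts, kin_pts)) / n
        for i in range(3)
    )

def correct_vision_point(p, offset):
    """Map a stereo-vision measurement into the kinematics frame."""
    return tuple(pi + oi for pi, oi in zip(p, offset))

# Paired observations collected while tracking the fingertip
# during arm movements (synthetic data for illustration).
vision = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
kin = [(0.1, 0.2, 0.3), (1.1, 0.2, 0.3), (0.1, 1.2, 0.3)]
offset = fit_translation(vision, kin)
```

Once the transformation is fitted, every subsequent stereo measurement can be expressed in the arm's kinematic frame, which is what makes the calibration fully automated: no external markers, only the robot's own finger as the calibration target.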
“…However, the use of depth estimation as described above in real-world scenarios is hindered in the earlier methods 28,29 by the difficulty of computing real-time, robust disparity maps from moving stereo cameras. In these scenarios, the proposed depth estimation techniques help the robot to identify the ball and the environment and prepare the robot for the next action in real-world settings.…”
Section: Trigonometric Depth Estimation
“…In these scenarios, the proposed depth estimation techniques help the robot to identify the ball and the environment and prepare the robot for the next action in real-world settings. Regarding the estimation of the camera parameters, the procedure described in the studies by Fanello et al. 28 and Ciliberto et al. 29 was adapted, except for the calibration procedures. The proposed method remains viable in dynamic conditions, unlike the approaches of Pasquale et al. 30 and Sadeh-Or and Kaminka, 31 which impose static constraints in real-world scenarios.…”
Section: Trigonometric Depth Estimation
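A minimal sketch of what a trigonometric alternative to dense disparity maps can look like: two cameras separated by baseline B, each verging inward to fixate a point; with gaze angles θ_l and θ_r measured from each camera's straight-ahead direction toward the target, the fixated point lies at depth Z = B / (tan θ_l + tan θ_r). This geometry and these symbols are a generic illustration under stated assumptions, not the exact formulation of the cited studies:

```python
import math

def depth_from_vergence(baseline_m, theta_left, theta_right):
    """Depth of the fixated point for two verging cameras.
    Angles are measured from each camera's forward axis toward
    the target; derivation: tan(theta_l) + tan(theta_r) = B / Z."""
    return baseline_m / (math.tan(theta_left) + math.tan(theta_right))

# Illustrative numbers: 10 cm baseline, symmetric vergence on a
# point 1 m ahead on the midline (each camera offset 5 cm).
z = depth_from_vergence(0.1, math.atan(0.05), math.atan(0.05))
```

The appeal over disparity-map methods is that only two tracked image angles are needed per frame, which stays tractable when the cameras themselves are moving.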