2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids)
DOI: 10.1109/humanoids.2016.7803394
Eye in hand: Towards GPU accelerated online grasp planning based on pointclouds from in-hand sensor

Cited by 6 publications (4 citation statements) · References 12 publications
“…The approach integrates both shape and appearance information into an articulated ICP approach to track the robot's manipulator and the object while improving the 3D model of the object. Similarly, another work [70] attaches a depth sensor to a robotic hand and plans grasps directly in the sensed voxel grid. These approaches improve their models of the object using only a single sensory modality but from multiple points in time.…”
Section: Robotic Visual Shape Understanding
confidence: 99%
“…Other shape completion systems exist for household objects but not robotic grasping [25], [26]. Several geometric solutions to object 3D modeling have been proposed as well [27], [28], [29], [30].…”
Section: Related Work
confidence: 99%
“…The approach integrates both shape and appearance information into an articulated ICP approach to track the robot's manipulator and the object while improving the 3D model of the object. Similarly, another work [20] attaches a depth sensor to a robotic hand and plans grasps directly in the sensed voxel grid. These approaches improve their models of the object using only a single sensory modality but from multiple points in time.…”
Section: Related Work
confidence: 99%
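The citation statements above describe the paper's core idea at a high level: a depth sensor mounted on the hand, with grasps planned directly in the sensed voxel grid. The sketch below is a rough, hypothetical illustration of that idea only, not the paper's actual GPU pipeline; the grid parameters, function names, and the occupancy-count scoring heuristic are all assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's implementation): fuse depth points
# from an in-hand sensor into a voxel occupancy grid, then score a candidate
# grasp by counting occupied voxels in its closing region.
import numpy as np

VOXEL_SIZE = 0.005                            # assumed 5 mm voxel resolution
GRID_DIMS = np.array([64, 64, 64])            # assumed ~32 cm cubic workspace
GRID_ORIGIN = np.array([-0.16, -0.16, 0.0])   # assumed grid corner in the hand frame (m)


def integrate_points(grid, points):
    """Mark voxels hit by depth points (N x 3, metres, hand/grid frame)."""
    idx = np.floor((points - GRID_ORIGIN) / VOXEL_SIZE).astype(int)
    inside = np.all((idx >= 0) & (idx < GRID_DIMS), axis=1)
    idx = idx[inside]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid


def score_grasp(grid, center, half_extents):
    """Toy score: occupied voxels inside an axis-aligned closing region around
    a candidate grasp centre. Real planners add much richer checks (finger
    collisions, approach direction, force closure)."""
    lo = np.clip(np.floor((center - half_extents - GRID_ORIGIN) / VOXEL_SIZE).astype(int), 0, GRID_DIMS - 1)
    hi = np.clip(np.ceil((center + half_extents - GRID_ORIGIN) / VOXEL_SIZE).astype(int), 0, GRID_DIMS - 1)
    return int(grid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]].sum())


if __name__ == "__main__":
    grid = np.zeros(tuple(GRID_DIMS), dtype=bool)
    # Synthetic points on a small sphere stand in for in-hand sensor output.
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(2000, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    grid = integrate_points(grid, 0.03 * dirs + np.array([0.0, 0.0, 0.10]))
    print("score at object centre:",
          score_grasp(grid, np.array([0.0, 0.0, 0.10]), np.array([0.04, 0.04, 0.04])))
```

In practice the appeal of a voxel representation here is that integration and grasp scoring are embarrassingly parallel over voxels and candidates, which is what makes the GPU acceleration referenced in the title attractive.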