2019 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2019.8793805
Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter

Abstract: Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present. Where other approaches use a static camera position or fixed data collection routines, our Multi-View Picking (MVP) controller uses an active perception approach to choose informative viewpoints based directly on a distribution of grasp pose estimates in real time, reducing uncertainty in the grasp poses caused by clutter and occlusions. In trials of grasping 20 objects from clu…
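The abstract describes choosing informative viewpoints from a distribution of grasp pose estimates so as to reduce uncertainty. As an illustrative sketch only (this is not the authors' MVP controller; the grid-cell representation, function names, and the simple entropy-sum objective are all assumptions), an uncertainty-driven next-best-view rule might look like:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli grasp-success estimate p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def next_best_view(quality_map, candidate_views, visible_cells):
    """Pick the camera viewpoint whose visible grasp cells carry the most
    total uncertainty (entropy) under the current grasp-quality estimates.

    quality_map    : dict cell -> estimated grasp success probability
    candidate_views: list of viewpoint ids
    visible_cells  : dict viewpoint id -> set of cells it observes
    """
    def score(view):
        return sum(entropy(quality_map[c]) for c in visible_cells[view])
    return max(candidate_views, key=score)

# Toy example: three grasp cells, two candidate viewpoints.
qmap = {"a": 0.5, "b": 0.9, "c": 0.5}   # p = 0.5 is maximally uncertain
views = ["left", "right"]
vis = {"left": {"a", "c"}, "right": {"b"}}
print(next_best_view(qmap, views, vis))  # "left": it sees both uncertain cells
```

In this toy setup the "left" viewpoint wins because it observes the two cells whose estimates are still maximally uncertain, mirroring the paper's idea of moving the wrist-mounted camera to where observation reduces grasp-pose uncertainty most.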

Cited by 68 publications (53 citation statements)
References 24 publications (44 reference statements)
“…Those studies are included as they also discussed the attributes of an ideal viewpoint. There were 3 studies out of 49 where the view was of action performed by a robot but the view was not external (all these studies selected the best viewpoints for grasping where the camera was mounted on the arm) [45,46,47]. There were 2 studies out of 49 where the subject of the external view was action but it was not performed by a robot but rather a computer player [48] or human [49].…”
Section: Studies That Discussed Attributes Of Ideal Viewpoint
Citation type: mentioning; confidence: 99%
“…The fifth category was human factors consisting of psychomotor aspects [38,39,40,41], sketchability (ability to draw a sketch of an object) [54,66,67], aesthetics [72,73,56], and preference [58,62]. The sixth category was other ad hoc attributes containing alignment with geometrical task model [31,32,33,34,35,36,20], similarity to example images [69], human trackability [49], and graspability [45,47].…”
Section: Attributes Of Ideal Viewpoint
Citation type: mentioning; confidence: 99%
“…Recently, deep-RL has been used in various robotic applications [40], such as placement [55], grasping objects mixed with towels [56], grasping deformable objects [57] and grasping in cluttered scenes [58], [59]. The grasping task in clutter has been intensively examined in numerous studies [60]- [63]. Deep-RL has led to advanced technologies by using visual and tactile features, particularly in robotic grasping [64].…”
Section: A. Grasping
Citation type: mentioning; confidence: 99%
“…In grasp detection research the focus lies on finding the correct position and orientation to make robust and accurate grasps for a given object. In [12,13,14,15] models are trained on readily available labeled datasets, with printable 3D-objects [16,17], 2D-images [18] or real life benchmark objects [19,20]. While this tends to create good grasping points for known objects, it is challenging and time consuming to create or to extend to new objects: each dataset sample needs to be labeled with the position, orientation and in most cases even the height and width of the grasp location.…”
Section: Grasp Detection and Planning
Citation type: mentioning; confidence: 99%
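The citation statement above notes that rectangle-style grasp datasets label each sample with a position, orientation, and usually the height and width of the grasp location. A minimal sketch of such a 5-parameter planar grasp label (the class name and field layout are hypothetical, not any specific dataset's format):

```python
import math
from dataclasses import dataclass

@dataclass
class GraspRect:
    """5-parameter planar grasp label: centre (x, y), in-plane rotation
    theta, gripper opening width, and jaw height."""
    x: float
    y: float
    theta: float   # radians, rotation of the gripper about the view axis
    width: float   # gripper opening along the grasp axis
    height: float  # jaw size perpendicular to the grasp axis

    def corners(self):
        """Return the four rectangle corners in image coordinates."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        dx, dy = self.width / 2, self.height / 2
        return [(self.x + c * ax - s * ay, self.y + s * ax + c * ay)
                for ax, ay in ((-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy))]

g = GraspRect(x=100.0, y=80.0, theta=0.0, width=40.0, height=20.0)
print(g.corners())  # theta = 0, so the rectangle is axis-aligned
```

Labeling every sample with all five parameters is exactly what makes these datasets expensive to extend to new objects, as the citing paper observes.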