2019 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2019.8793972
Robust object grasping in clutter via singulation

Cited by 37 publications (32 citation statements)
References 13 publications
“…Finally, to produce more optimal policies, we take into consideration the workspace limits and their nature (open or constraining walls), which are ignored in [21]. Kiatos et al. [22] employ a pushing policy with discrete actions, learned via Q-learning, to singulate a target object from its surrounding clutter using depth features that approximate the topography of the scene. However, they made restrictive assumptions about the height of the obstacles and the properties of the target object.…”
Section: Related Work
confidence: 99%
“…However, they made restrictive assumptions about the height of the obstacles and the properties of the target object. Sarantopoulos and Kiatos [23] extended the work of [22] by modelling the Q-function with two different networks, one for each primitive action.…”
Section: Related Work
confidence: 99%
“…However, they evaluated a large number of potential pushes to find the optimal action. Kiatos et al. [32] trained a deep Q-network to select push actions that singulate a target object from its surrounding clutter in the minimum number of pushes, using depth features to approximate the topography of the scene. Sarantopoulos and Kiatos [33] extended the work of [32] by modelling the Q-function with two different networks, one for each primitive action, leading to higher success rates and faster network convergence.…”
Section: B. Learning-based Approaches
confidence: 99%
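The two-network Q-function design mentioned in these citation statements can be illustrated with a minimal sketch: one small network per primitive push action maps a depth-feature vector to Q-values over discrete push directions, and the agent acts epsilon-greedily over the union of both heads. Everything here is assumed for illustration — the feature size, the eight push directions, the primitive names `push-target` and `push-obstacle`, and the tiny MLP sizes are not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 16    # depth features approximating the scene topography (assumed size)
N_DIRECTIONS = 8   # discrete push directions per primitive (assumed)

def make_net(n_in, n_hidden, n_out, rng):
    """A tiny two-layer MLP stored as a dict of weight arrays."""
    return {
        "W1": rng.standard_normal((n_in, n_hidden)) * 0.1,
        "b1": np.zeros(n_hidden),
        "W2": rng.standard_normal((n_hidden, n_out)) * 0.1,
        "b2": np.zeros(n_out),
    }

def q_values(net, x):
    """Forward pass: ReLU hidden layer, linear Q-value head."""
    h = np.maximum(0.0, x @ net["W1"] + net["b1"])
    return h @ net["W2"] + net["b2"]

# One Q-network per primitive action (primitive names are hypothetical).
nets = {
    "push-target": make_net(N_FEATURES, 32, N_DIRECTIONS, rng),
    "push-obstacle": make_net(N_FEATURES, 32, N_DIRECTIONS, rng),
}

def select_action(nets, x, eps, rng):
    """Epsilon-greedy over the union of both primitives' Q-values."""
    if rng.random() < eps:  # explore: random primitive and direction
        prim = str(rng.choice(list(nets)))
        return prim, int(rng.integers(N_DIRECTIONS))
    # exploit: greedy over all (primitive, direction) pairs
    return max(
        ((p, int(np.argmax(q_values(n, x)))) for p, n in nets.items()),
        key=lambda pa: float(q_values(nets[pa[0]], x)[pa[1]]),
    )

x = rng.standard_normal(N_FEATURES)  # a stand-in depth-feature vector
prim, direction = select_action(nets, x, eps=0.0, rng=rng)
```

With `eps=0.0` the selection is fully greedy, so the returned pair is simply the global argmax over both heads; splitting the Q-function this way lets each network specialise on one primitive, which the citing authors report speeds up convergence.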
“…RL-based push policies, trained to be optimal for given depth observations of a scene, were presented in [124]. These policies facilitate grasping objects in cluttered scenes, where the target may be invisible, by using a deep neural network with Q-learning.…”
Section: B. Suction and Multifunctional Grasping
confidence: 99%