2018 IEEE International Conference on Robotics and Biomimetics (ROBIO)
DOI: 10.1109/robio.2018.8665255

Knowledge-Driven Deep Deterministic Policy Gradient for Robotic Multiple Peg-in-Hole Assembly Tasks

Cited by 17 publications (9 citation statements)
References 7 publications
“…A deep Q-learning network was used to model the learning process of assembly skills [15]. Hou et al. proposed the knowledge-driven deep deterministic policy gradient algorithm for robotic multiple peg-in-hole assembly tasks [34]. The present work deals with the peg-hole-based assembly of rigid parts.…”
Section: B. Related Work
confidence: 99%
“…The assembly components might be damaged owing to the imprecise pose. Compared with the peg-hole task [33], [34], the object complexity increases the difficulty of robotic skill acquisition. Given the above, the main problem to be solved is to make robots acquire the skills of object location determination and pose adjustment.…”
Section: B. Problem Setup
confidence: 99%
“…An actor-critic method was also used to complete high-precision assembly tasks (Wang et al., 2019). However, these methods need to discretize the action space to output discrete actions (Hou et al., 2018; Beltran-Hernandez et al., 2020a). The most significant limitation of this approach is the curse of dimensionality: as the degrees of freedom increase, the number of actions will increase exponentially (Lillicrap et al., 2015).…”
Section: Introduction
confidence: 99%
“…For tasks that require precise control of motion, the situation is even worse because they require finer discretization. This fine-grained discretization will lead to ineffective exploration in a large action space and reduce the learning speed (Hou et al., 2018; Lillicrap et al., 2015). Therefore, these methods cannot well solve the problems in continuous and high-dimensional action space, such as robot control (Lillicrap et al., 2015; Beltran-Hernandez et al., 2020a).…”
Section: Introduction
confidence: 99%
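The exponential blow-up described in the quoted passages can be made concrete with a short sketch. This is an illustrative calculation only (the function name and bin counts are assumptions, not taken from the cited papers): with k discretization levels per action dimension and d degrees of freedom, a discrete policy must select among k^d joint actions, whereas a continuous-control method such as DDPG outputs a single d-dimensional vector, growing only linearly with d.

```python
# Illustrative sketch: size of a fully discretized action space.
# With `bins_per_dim` levels per action dimension and `dof` degrees
# of freedom, a discrete policy chooses among bins_per_dim ** dof actions.
def num_discrete_actions(bins_per_dim: int, dof: int) -> int:
    """Number of joint actions after discretizing each dimension."""
    return bins_per_dim ** dof

# A 6-DOF manipulator with a modest 10-level discretization per
# dimension already yields one million discrete actions.
for dof in (2, 4, 6):
    print(dof, num_discrete_actions(10, dof))
# → 2 100
# → 4 10000
# → 6 1000000
```

This is why fine-grained discretization makes exploration inefficient: most of the million actions are never sampled during training, while a continuous actor only has to produce six real-valued outputs.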