2013
DOI: 10.1007/s00170-013-5413-z
Elimination of unnecessary contact states in contact state graphs for robotic assembly tasks

Abstract: Developing a contact state graph and finding an assembly sequence requires substantial computation because polyhedral objects consist of many vertices, edges, and faces. In this paper, we propose a new method to eliminate unnecessary contact states in the contact state graph corresponding to a robotic assembly task. In our method, the faces of polyhedral objects are triangulated, and the adjacency of each vertex, edge, and triangle between an initial contact state and a target contact state is defined. Then, this ad…
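The abstract describes eliminating contact states that cannot contribute to a path from the initial to the target contact state. A minimal sketch of that idea, using simple bidirectional reachability pruning as a stand-in for the paper's adjacency criterion; the graph, state names, and functions below are all illustrative assumptions, not the authors' actual algorithm:

```python
# Hedged sketch: prune a contact state graph so only states that lie on some
# path from the initial state to the target state survive. This is a
# simplified stand-in for the adjacency-based elimination in the paper.
from collections import deque

def reachable(graph, start):
    """Return the set of states reachable from `start` by BFS."""
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def prune(graph, init, goal):
    """Keep only states reachable from `init` that can still reach `goal`."""
    forward = reachable(graph, init)
    reverse = {}
    for u, vs in graph.items():
        for v in vs:
            reverse.setdefault(v, []).append(u)
    backward = reachable(reverse, goal)
    keep = forward & backward
    return {u: [v for v in graph.get(u, []) if v in keep] for u in keep}

# Toy contact state graph (hypothetical labels): "jammed" is a dead end that
# cannot reach the target, so it is eliminated.
g = {
    "detached": ["face-face", "jammed"],
    "face-face": ["edge-hole", "jammed"],
    "edge-hole": ["inserted"],
    "jammed": [],
}
pruned = prune(g, "detached", "inserted")
print(sorted(pruned))  # → ['detached', 'edge-hole', 'face-face', 'inserted']
```

Pruning in both directions at once removes not only unreachable states but also states from which the target can no longer be reached, which is what shrinks the graph before sequence planning.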


Cited by 3 publications (2 citation statements)
References 20 publications
“…In the pick-and-place processes, only one policy is learned for the whole assembly process; nevertheless, it may fail to work in small parts assembly tasks since features of the assembly motion vary significantly between the two phases. Another formulation of small parts assembly tasks is the peg-in-hole problem [14,16,26], and Reinforcement Learning (RL) based methods are widely applied to learn a decision-making policy that maps states to actions through trial-and-error [7,8,12,27]. RL-based approaches typically require the robot to explore the state space.…”
Section: Initial Frame
confidence: 99%
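The statement above describes learning a state-to-action policy through trial-and-error. A minimal sketch of that idea using tabular Q-learning on a toy one-dimensional "peg insertion" task; the environment, state encoding, and all names are illustrative assumptions, not taken from the cited papers:

```python
# Hedged sketch: tabular Q-learning for a toy peg-insertion task.
# States 0..4 are insertion depths (4 = fully inserted); actions retract/push.
import random

N_STATES = 5
ACTIONS = [-1, +1]  # retract / push
GOAL = N_STATES - 1

def step(state, action):
    """Toy transition: move one depth step; reward only at full insertion."""
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy exploration: the "trial-and-error" part.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[s][i])
            nxt, r, done = step(s, ACTIONS[a])
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = q_learning()
# Greedy policy extracted from the learned Q-table, one action per state.
policy = [ACTIONS[max((0, 1), key=lambda i: q[s][i])] for s in range(GOAL)]
print(policy)
```

Because reward arrives only at the goal, the values propagate backward through the table over episodes, which is why such methods need extensive exploration of the state space, as the statement notes.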
“…Reference [11] proposes a new method for eliminating unnecessary contact states in the state graph corresponding to the problem of assembling polyhedral joints by a robot. The method significantly reduces the number of contact states and transitions in the final contact graph when generating an assembly sequence.…”
Section: Introduction
confidence: 99%