2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2021
DOI: 10.1109/iros51168.2021.9635954
In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning

Cited by 22 publications (17 citation statements)
References 32 publications
“…Therefore, when a command with the same position and task completion time was input, the same motion with very little variation was generated, without considering the size and shape of the object. Recently, methods using raw images have been studied [51], [52]. These studies add raw images to the NN input and learn to generate appropriate motions in response to changes in the position and shape of the object.…”
Section: Results
confidence: 99%
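The excerpt above describes conditioning a motion-generation network on raw image features in addition to joint angles. A minimal sketch of that idea is shown below; it is not the cited authors' implementation, and the dimensions, random weights, and the `predict_next_joints` helper are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 64-d image feature vector and 7 joint angles.
IMG_FEAT_DIM, JOINT_DIM, HIDDEN_DIM = 64, 7, 32

# Randomly initialised weights stand in for a trained network.
W1 = rng.normal(0.0, 0.1, (IMG_FEAT_DIM + JOINT_DIM, HIDDEN_DIM))
W2 = rng.normal(0.0, 0.1, (HIDDEN_DIM, JOINT_DIM))

def predict_next_joints(img_feat: np.ndarray, joints: np.ndarray) -> np.ndarray:
    """One forward step: concatenate image features with the current joint
    angles, pass through a small MLP, and predict the next joint command."""
    x = np.concatenate([img_feat, joints])   # image + proprioception
    h = np.tanh(x @ W1)                      # hidden layer
    return h @ W2                            # next joint angles

img_feat = rng.normal(size=IMG_FEAT_DIM)    # e.g. an autoencoder bottleneck
joints = np.zeros(JOINT_DIM)                # current joint configuration
next_joints = predict_next_joints(img_feat, joints)
print(next_joints.shape)  # (7,)
```

Because the image features enter the same input vector as the joint angles, the generated motion can vary with the observed position and shape of the object rather than replaying a single trajectory.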
“…However, the spatial information in this study was limited to the center position of the pancake at the beginning of the task, and the shape and size of the object were not considered. Therefore, our future work is to integrate the proposed method with a real-time image-based motion generation method [51] and a method that considers the shape and size of multiple objects [52], to expand the tasks that the robot can perform in space and time.…”
Section: Discussion
confidence: 99%
“…Two types of neural networks were used: a CNN-based deep autoencoder for image feature extraction, and a fully connected (FC) network for integrating the image features and robot joint angles. Suzuki et al [4] enabled robots to manipulate ropes with two arms, building on the method of Yang et al [9]. In addition to the image and the joint angles of the robot, a proximity sensor was used to recognize the rope state.…”
Section: A Deep Learning-based Motion Learning
confidence: 99%
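The two-network pipeline described above (an autoencoder compressing the image to a feature vector, whose bottleneck then feeds the FC integration network) can be sketched as follows. This is a toy stand-in, not the cited architecture: the real work uses convolutional layers and trained weights, whereas here a single linear encoder/decoder pair over a flattened 16×16 image with random weights illustrates only the data flow.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the CNN autoencoder: one linear encoder/decoder pair
# over a flattened 16x16 grayscale image (illustrative sizes).
IMG_PIXELS, BOTTLENECK = 16 * 16, 20

W_enc = rng.normal(0.0, 0.05, (IMG_PIXELS, BOTTLENECK))
W_dec = rng.normal(0.0, 0.05, (BOTTLENECK, IMG_PIXELS))

def encode(image: np.ndarray) -> np.ndarray:
    """Compress the image into a low-dimensional feature vector (the
    bottleneck that would be passed to the FC integration network)."""
    return np.tanh(image.reshape(-1) @ W_enc)

def decode(feat: np.ndarray) -> np.ndarray:
    """Reconstruct the image from the feature vector; the reconstruction
    loss is what trains the autoencoder."""
    return (feat @ W_dec).reshape(16, 16)

image = rng.random((16, 16))
feat = encode(image)        # image feature for the FC network
recon = decode(feat)        # reconstruction used only during training
print(feat.shape, recon.shape)  # (20,) (16, 16)
```

The design point the excerpt makes is the separation of concerns: the autoencoder is trained to summarize the camera image, so the downstream motion network only has to relate a compact feature vector (plus joint angles, and in [4] a proximity signal) to the next action.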
“…However, it is very challenging to make robots manipulate these flexible objects. Flexible-object manipulations achieved by robots so far include folding clothes, tying ropes, and folding paper [1]-[4]. For these tasks, it is important to predict how the object will be deformed by the robot's actions, and these tasks have been accomplished mainly using vision.…”
Section: Introduction
confidence: 99%