2021
DOI: 10.48550/arxiv.2103.09402
Preprint
In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning

Abstract: In this study, we report the successful execution of in-air knotting of rope using a dual-arm two-finger robot based on deep learning. Owing to its flexibility, the state of the rope was in constant flux during the operation of the robot, which required the robot control system to respond dynamically to the state of the object at all times. However, it is difficult to manually describe in advance appropriate robot motions corresponding to all object states. To resolve this issue, we constructed…

Cited by 2 publications (2 citation statements)
References 23 publications
“…Specifically, it consists of three steps: (1) collect sensory-motor information (e.g., camera images, joint angles, and torque) with the robot as training data while a human teleoperates the robot or performs direct teaching; (2) input the sensor information x_t at time t into the model, output the predicted sensor information ŷ_{t+1} for the next time t+1, and update the weights of the model to minimize the error between the predicted value ŷ_{t+1} and the true value x_{t+1}; and (3) at execution time, have the robot generate sequential motions by inputting its sensor information x_t and feeding the predicted value (motion command value) back to the robot for the next time step. This method can be used to perform various tasks, such as flexible object handling, which are difficult with conventional methods [31,32].…”
Section: Sensorimotor Module
confidence: 99%
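The three steps quoted above can be sketched in miniature. The following is an illustrative toy, not the cited paper's method: a linear next-step predictor stands in for the deep model, and the 3-D trajectory, dynamics matrix, and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) "Collect" sensory-motor data: a toy 3-D trajectory x_t
# (think image feature, joint angle, torque) from fixed dynamics.
A_true = np.array([[0.9, 0.1, 0.0],
                   [0.0, 0.9, 0.1],
                   [0.0, 0.0, 0.9]])
xs = [np.array([1.0, 0.5, -0.5])]
for _ in range(200):
    xs.append(A_true @ xs[-1] + 0.01 * rng.standard_normal(3))
X = np.stack(xs[:-1])  # inputs  x_t
Y = np.stack(xs[1:])   # targets x_{t+1}

# (2) Train: minimize the error between the prediction and x_{t+1}.
# For a linear model the least-squares solution is closed-form.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
train_mse = np.mean((X @ W - Y) ** 2)

# (3) Execute: feed each prediction back in as the next input,
# so the model generates the motion sequence on its own.
x = xs[0]
rollout = [x]
for _ in range(50):
    x = x @ W  # the predicted next state becomes the next input
    rollout.append(x)
```

The design point the quote makes survives even in this toy: the same trained predictor serves both for learning (step 2) and for closed-loop motion generation (step 3), with no hand-written motion description per object state.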
“…The retractor function of the trained RNN model performs perceptual inference (fusing the predicted sensory input with the current sensory input) and active inference (behavioral adjustment), resulting in an attractor transition that reduces its prediction error. As a result, multiple tasks, such as handling flexible objects (24,25), liquids, and powder, are realized in the real world with a multi-degree-of-freedom robot. We also realized flexible contact-based object manipulation that was difficult to control with vision alone (26).…”
Section: Introduction
confidence: 99%
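The fuse-then-adjust loop in the quote above can be illustrated with a deliberately tiny sketch. Everything here is an assumption for illustration: a hypothetical 1-DOF plant, a constant prediction standing in for the trained RNN's attractor, and made-up gains.

```python
import numpy as np

def plant(u, rng):
    """Hypothetical 1-DOF plant: the sensor roughly tracks the command."""
    return u + 0.01 * rng.standard_normal()

rng = np.random.default_rng(1)
target = 0.8   # sensory state the trained model "expects" (its attractor)
alpha = 0.5    # fusion weight between prediction and observation
gain = 0.3     # step size of the behavioral adjustment

u, errors = 0.0, []
for _ in range(100):
    y_obs = plant(u, rng)
    y_pred = target                                 # model's prediction
    y_fused = alpha * y_pred + (1 - alpha) * y_obs  # perceptual inference
    err = y_pred - y_fused                          # remaining prediction error
    u += gain * err                                 # active inference: act to shrink it
    errors.append(abs(y_pred - y_obs))

final_error = np.mean(errors[-10:])
```

The loop never computes an explicit trajectory to the target; the command drifts toward the predicted sensory state simply because acting is the only way to reduce the prediction error, which is the attractor-transition intuition the quoted passage describes.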