2015
DOI: 10.1088/1748-3190/10/3/035006

Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space

Abstract: This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability to twirl around objects, as a real octopus arm does. Despite the simple design, the cable-driven soft conical manipulator is described by nonlinear differential equations, which are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix …

Cited by 54 publications (35 citation statements)
References 67 publications (110 reference statements)

“…Any control algorithm must therefore factor in the end load. Though analytical equations for the inverse kinematics cannot be obtained, the fast and accurate forward analysis method may be used to train a model-free control framework using a neural network approach [39] for the different end loads applied.…”
Section: Implication On Controls
confidence: 99%
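
As a rough illustration of that idea, the sketch below samples a toy forward model of a cable-driven arm under varying end loads and fits a neural-network regressor to the inverse map from desired tip position plus end load to cable tensions. It is a hypothetical stand-in, not the paper's model or code: the forward model, ranges, and network size are invented, and scikit-learn's MLPRegressor is used only for brevity.

```python
# Hypothetical sketch: learn an inverse map for a cable-driven arm by sampling
# a toy forward model and fitting a neural-network regressor.
# The forward model below is a stand-in, NOT the paper's forward analysis.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def forward_model(cable_tensions, end_load):
    """Placeholder forward model: 3 cable tensions + end load -> 3-D tip position."""
    t1, t2, t3 = cable_tensions
    droop = 0.1 * end_load                      # load pulls the tip downward
    x = 0.3 * (t1 - t2)
    y = 0.3 * (t2 - t3)
    z = 0.5 * (t1 + t2 + t3) / 3.0 - droop
    return np.array([x, y, z])

# Sample (tip position, end load) -> cable-tension pairs from the forward model
n = 5000
tensions = rng.uniform(0.0, 1.0, size=(n, 3))
loads = rng.uniform(0.0, 0.5, size=(n, 1))
tips = np.array([forward_model(t, l[0]) for t, l in zip(tensions, loads)])

X = np.hstack([tips, loads])   # inputs: desired tip position + measured end load
Y = tensions                   # outputs: actuation (cable tensions)

ik_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ik_net.fit(X, Y)

# Query the learned inverse model for a target tip position under a given load
target = np.array([[0.05, -0.02, 0.35, 0.2]])   # [x, y, z, end_load]
print("predicted cable tensions:", ik_net.predict(target))
```
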
“…In our recent work [77] we presented preliminary, yet promising, results on the use of Reinforcement Learning (RL) for position control of the BR2 soft arm. The main benefit of RL over other neuro-adaptive control strategies [78][79][80] is that RL learns an optimal policy directly from experience. This precludes the need for a separate control strategy to choose optimal actions while transitioning between states.…”
Section: Example Results With Deep Reinforcement Learning For Soft Ro…
confidence: 99%
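
A minimal sketch of the learning-from-experience idea, using tabular Q-learning on a toy one-dimensional tip-positioning task. The environment, discretization, and reward are invented for illustration and do not correspond to the BR2 arm or to the controllers cited above.

```python
# Hypothetical sketch: tabular Q-learning for a toy 1-D tip-positioning task,
# illustrating how RL learns a control policy directly from experience.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 21          # discretized tip positions along one axis
GOAL = 15              # target position index
ACTIONS = [-1, 0, +1]  # decrease / hold / increase actuation

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Toy environment: move the tip one cell, reward proximity to the goal."""
    nxt = int(np.clip(state + ACTIONS[action], 0, N_STATES - 1))
    reward = 1.0 if nxt == GOAL else -abs(nxt - GOAL) / N_STATES
    return nxt, reward, nxt == GOAL

for episode in range(2000):
    s = int(rng.integers(N_STATES))
    for _ in range(100):
        # epsilon-greedy exploration
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next-state value
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
        if done:
            break

# Greedy policy after learning: which actuation change to apply at each position
print("policy:", [ACTIONS[int(a)] for a in np.argmax(Q, axis=1)])
```
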
“…A total of six passive markers are placed along the prototype, two within the hook and four along the filament. A Direct Linear Transformation algorithm [42], previously employed for soft robot reconstruction [43] in a water environment, was used to obtain the positions of the markers in space. Eight positions within the tank were used as calibration points.…”
Section: A. Set-up
confidence: 99%
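
A minimal sketch of the Direct Linear Transformation procedure described above, with two synthetic cameras and eight synthetic calibration points standing in for the tank setup. The camera coefficients, calibration points, and marker position are invented for illustration, not taken from the cited experiment.

```python
# Hypothetical sketch: Direct Linear Transformation (DLT) for reconstructing a
# marker's 3-D position from two camera views, calibrated from 8 known points.
import numpy as np

rng = np.random.default_rng(0)

def project(L, P):
    """Standard 11-parameter DLT projection of 3-D point P to image (u, v)."""
    X, Y, Z = P
    d = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / d
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / d
    return np.array([u, v])

def calibrate(points3d, points2d):
    """Solve a camera's 11 DLT coefficients from known 3-D/2-D point pairs."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]); b.append(v)
    L, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return L

def reconstruct(cams, uvs):
    """Least-squares 3-D position from (u, v) observations in >= 2 cameras."""
    A, b = [], []
    for L, (u, v) in zip(cams, uvs):
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]]); b.append(u - L[3])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]]); b.append(v - L[7])
    P, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return P

# Two synthetic "cameras" (true DLT coefficients) and 8 calibration points
cam1_true = np.array([100, 0, 0, 320, 0, 100, 0, 240, 0.01, 0.0, 0.02])
cam2_true = np.array([0, 100, 0, 320, 0, 0, 100, 240, 0.0, 0.01, 0.02])
calib3d = rng.uniform(0.0, 1.0, size=(8, 3))

cams = [calibrate(calib3d, [project(c, p) for p in calib3d])
        for c in (cam1_true, cam2_true)]

marker = np.array([0.4, 0.7, 0.2])                       # unknown marker position
uvs = [project(c, marker) for c in (cam1_true, cam2_true)]
print("reconstructed marker:", reconstruct(cams, uvs))   # ~ [0.4, 0.7, 0.2]
```
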