2018
DOI: 10.31224/osf.io/n7h9y
Preprint

Open Loop Position Control of Soft Continuum Arm Using Deep Reinforcement Learning

Abstract: Soft robots undergo large nonlinear spatial deformations due to both inherent actuation and external loading. The physics underlying these deformations is complex and often requires intricate analytical and numerical models. The complexity of these models may render traditional model-based control difficult and unsuitable. Model-free methods offer an alternative for analyzing the behavior of such complex systems without the need for elaborate modeling techniques. In this paper, we present a model-free approach…

Cited by 17 publications
(23 citation statements)
References 4 publications
“…Model-free methods offer an alternative for analyzing the behavior of such complex systems without the need for elaborate modeling techniques. In our recent work [77] we presented preliminary yet promising results on the use of Reinforcement Learning (RL) for position control of the BR² soft arm. The main benefit of RL over other neuro-adaptive control strategies [78][79][80] is that RL directly learns an optimal policy from experience.…”
Section: Example Results With Deep Reinforcement Learning For Soft Ro… (mentioning)
confidence: 99%
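The statement above credits RL with learning a control policy directly from experience, with no model of the arm's deformation physics. As a minimal illustration of that idea — not the paper's actual method, which applies deep RL to the continuous-state BR² arm — the sketch below runs tabular Q-learning on a hypothetical 1-D reaching task with discretized actuation levels; all names, sizes, and rewards are illustrative assumptions:

```python
import numpy as np

# Toy stand-in for the soft-arm reaching task: the "tip" moves on a
# 1-D grid of discretized actuation levels and must reach a target cell.
N_STATES, TARGET = 11, 8
ACTIONS = (-1, +1)  # decrease / increase the actuation level by one step

def step(state, action_idx):
    nxt = int(np.clip(state + ACTIONS[action_idx], 0, N_STATES - 1))
    reward = 1.0 if nxt == TARGET else -0.01  # sparse goal reward, small step cost
    return nxt, reward, nxt == TARGET

def q_learning(episodes=500, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, len(ACTIONS)))
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            # epsilon-greedy action selection
            a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(Q[s].argmax())
            s2, r, done = step(s, a)
            # temporal-difference update: the policy is learned purely from
            # experienced transitions, never from a physics model
            Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
            s = s2
            if done:
                break
    return Q

Q = q_learning()
s = 0
for _ in range(20):  # greedy rollout with the learned policy
    s, _, done = step(s, int(Q[s].argmax()))
    if done:
        break
```

In the deep-RL setting the discrete Q-table is replaced by a neural network over the arm's continuous tip position and actuation inputs, but the learning signal is the same experience-driven update.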
“…Since the method requires no information about the robot or the environment a priori, it enables control in complex scenarios, where highly complex physics-based models may have poorly observable parameters or states. It has also been shown that inverse kinematics for continuum robots may be approximated by a multilayer perceptron network (George et al., 2017; Grassmann et al., 2018; Lai et al., 2019), with multi-agent reinforcement learning (Ansari et al., 2016), with K-nearest neighbors and Gaussian mixture regression (Chen and Lau, 2016), and with deep reinforcement learning (Satheeshbabu et al., 2019). For reconfigurable robots subject to varying loads, it has been shown that classification of the load state using long short-term memory networks can substantially improve open-loop kinematic control (Nicolai et al., 2020).…”
Section: Review Of The State Of The Art (mentioning)
confidence: 99%
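Several of the works cited above approximate continuum-robot inverse kinematics with a multilayer perceptron. The sketch below illustrates the idea on a deliberately simple, hypothetical system rather than any of the cited robots: a one-actuator "arm" whose tip position y = sin(1.2·p) is a monotone function of the actuation p on [0, 1], so the inverse map is well defined and a small NumPy MLP can regress actuation from tip position. None of the names, dimensions, or hyperparameters come from the cited papers:

```python
import numpy as np

# Assumed toy forward model: tip position y = sin(1.2 * p), p in [0, 1],
# chosen so the actuation-to-tip map is monotone and invertible.
rng = np.random.default_rng(0)
P = rng.uniform(0.0, 1.0, size=(2000, 1))  # sampled actuation values
Y = np.sin(1.2 * P)                        # simulated tip positions

# one hidden tanh layer, trained by full-batch gradient descent on MSE
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(Y @ W1 + b1)             # network input is the tip position
    pred = H @ W2 + b2                   # network output is the actuation
    err = pred - P
    gW2 = H.T @ err / len(P); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    gW1 = Y.T @ dH / len(P); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def ik(y_target):
    """Predict the actuation that reaches tip position y_target."""
    h = np.tanh(np.array([[y_target]]) @ W1 + b1)
    return float((h @ W2 + b2)[0, 0])

p_hat = ik(np.sin(1.2 * 0.5))  # ground-truth actuation here is p = 0.5
```

Real continuum arms have multimodal inverse kinematics (several actuations can reach the same tip pose), which is one reason the cited works turn to reinforcement learning and mixture models rather than plain regression.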
“…It was shown that model-less control can achieve higher positioning accuracy for the manipulator [41]. Further, researchers have used the goal babbling technique [42], RNN-based control [43], [44], reinforcement learning [45], and deep reinforcement learning [46] for it. However, these models suffer from the following drawbacks.…”
Section: Introduction (mentioning)
confidence: 99%
“…The model needs to be trained from scratch for the new conditions. Some model-less algorithms [46] have shown good performance in noisy or slightly changed conditions. However, the performance decreases as the magnitude of variation increases.…”
Section: Introduction (mentioning)
confidence: 99%