2021
DOI: 10.1177/0278364920979367

Hierarchical control of soft manipulators towards unstructured interactions

Abstract: Performing daily interaction tasks such as opening doors and pulling drawers in unstructured environments is a challenging problem for robots. The emergence of soft-bodied robots brings a new perspective to solving this problem. In this paper, inspired by humans performing interaction tasks through simple behaviors, we propose a hierarchical control system for soft arms, in which the low-level controller achieves motion control of the arm tip, the high-level controller controls the behaviors of the arm based on …
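The abstract outlines a two-level architecture: a low-level controller that drives the arm tip toward a target, and a high-level controller that selects behaviors from the task state. The sketch below only illustrates how such a hierarchy could be wired together; the class names, the discrete behavior set, and the proportional tip controller are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class LowLevelTipController:
    """Drives the arm tip toward a Cartesian target (illustrative P-controller)."""
    def __init__(self, gain=0.5):
        self.gain = gain

    def step(self, tip_position, target_position):
        # Proportional correction toward the target; a real soft-arm controller
        # would map this command to actuator pressures or cable tensions.
        return self.gain * (np.asarray(target_position) - np.asarray(tip_position))

class HighLevelBehaviorController:
    """Selects a simple behavior (reach, grasp, pull) from the task state (illustrative)."""
    def select(self, task_state):
        if not task_state["object_reached"]:
            return "reach"
        if not task_state["object_grasped"]:
            return "grasp"
        return "pull"

# Hierarchical loop: the high level picks a behavior, the low level executes it
# as tip-motion commands.
high = HighLevelBehaviorController()
low = LowLevelTipController()
task_state = {"object_reached": False, "object_grasped": False}
behavior = high.select(task_state)                     # e.g. "reach"
command = low.step(tip_position=[0.0, 0.0, 0.0],
                   target_position=[0.2, 0.1, 0.3])    # placeholder coordinates
```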

Cited by 81 publications (34 citation statements). References 62 publications (76 reference statements).
“…In the domain of robotic manipulation, compliant hands like the Pisa/IIT SoftHand 2 [11], the RBO Hand 2 [10], the i-Hy Hand [26], as well as the dexterous gripper presented in [27], have been shown to be well suited for grasping and in-hand manipulation. Besides hands, the dynamic properties of a system composed of a soft arm with a soft gripper can be used to robustly solve various interaction tasks [28].…”
Section: A. Outsourcing Control of Contact Dynamics
confidence: 99%
“…The control policy was learned using Q-learning with the simulation data, demonstrating its effectiveness and robustness in simulation and practice. Jiang et al. (2021) adopted the same soft arm and developed a hierarchical control algorithm for complex tasks such as opening a drawer and rotating a handwheel. The control architecture was inspired by the human decision-making process.…”
Section: Reinforcement Learning Without Kinematics/Dynamics Model
confidence: 99%
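The statement above refers to a control policy learned with Q-learning from simulation data. As a reference point, a minimal sketch of the standard tabular Q-learning update is given below; the state/action discretization, learning rate, and transition values are placeholder assumptions, not the setup of the cited works.

```python
import numpy as np

n_states, n_actions = 100, 8          # placeholder discretization
alpha, gamma = 0.1, 0.95              # learning rate and discount factor
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    """One Q-learning step: move Q(s, a) toward the bootstrapped target."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

# Example transition from a simulated rollout (illustrative values).
q_update(s=3, a=1, r=0.5, s_next=7)
```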
“…In the gradient-based approach, the policy function is maximized iteratively with gradient descent (Thuruthel et al., 2018; Liu et al., 2020). In contrast to policy-search reinforcement learning, value-based methods generate the optimal control policy by optimizing the value function, including SARSA (Ansari et al., 2017b), Q-learning (You et al., 2017; Jiang et al., 2021), DQN (Satheeshbabu et al., 2019; Wu et al., 2020), and its various extensions (e.g., DDQN (You et al., 2019) and Double DQN). The actor-critic approach is a combination of policy-based and value-based reinforcement learning, where the actor acts according to the policy, while the critic calculates the value function to evaluate the actor (Satheeshbabu et al., 2020).…”
Section: Policy-based vs. Value-based Reinforcement Learning
confidence: 99%
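To make the contrast in the statement above concrete, here is a minimal one-step actor-critic sketch: the critic maintains a state-value estimate and produces a TD error, and the actor shifts its softmax policy along that signal. The linear features, softmax parameterization, and step sizes are illustrative assumptions, not any cited implementation.

```python
import numpy as np

n_features, n_actions = 4, 3
theta = np.zeros((n_actions, n_features))   # actor parameters (softmax policy)
w = np.zeros(n_features)                     # critic parameters (linear state value)
alpha_actor, alpha_critic, gamma = 0.01, 0.05, 0.99

def policy(phi):
    # Softmax over linear action preferences.
    logits = theta @ phi
    p = np.exp(logits - logits.max())
    return p / p.sum()

def actor_critic_step(phi, a, r, phi_next):
    """One-step actor-critic: the critic computes the TD error, the actor follows it."""
    td_error = r + gamma * (w @ phi_next) - (w @ phi)   # critic's evaluation signal
    w_update = alpha_critic * td_error * phi             # critic moves toward the TD target
    p = policy(phi)
    grad_log = -np.outer(p, phi)                          # d log pi(a|s) / d theta
    grad_log[a] += phi
    theta_update = alpha_actor * td_error * grad_log      # actor follows the critic's signal
    return w_update, theta_update

# Example transition with illustrative feature vectors.
phi, phi_next = np.ones(n_features), np.zeros(n_features)
dw, dtheta = actor_critic_step(phi, a=1, r=1.0, phi_next=phi_next)
w += dw
theta += dtheta
```

In a purely value-based method such as Q-learning, only the critic-like value estimate is kept; in a purely policy-gradient method, only the actor is kept and the sampled return takes the place of the critic's TD error.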