2019
DOI: 10.1109/lra.2019.2898330

Exploiting Symmetries in Reinforcement Learning of Bimanual Robotic Tasks

Abstract: Movement Primitives (MPs) have been widely adopted for representing and learning robotic movements using Reinforcement Learning Policy Search. Probabilistic Movement Primitives (ProMPs) are a kind of MP based on a stochastic representation over sets of trajectories, capable of capturing the variability allowed while executing a movement. This approach has proved effective in learning a wide range of robotic movements, but it comes with the need to deal with a high-dimensional space of parameters. This may be a…
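For readers unfamiliar with ProMPs, the sketch below illustrates the core idea the abstract refers to: each demonstrated trajectory is projected onto a set of basis functions, and the variability across demonstrations is captured by a Gaussian distribution over the resulting weight vectors, which can then be sampled to generate new movements. This is a minimal single-degree-of-freedom illustration with assumed names and parameters (basis, fit_promp, sample_trajectory, the basis width), not the implementation or the symmetry-exploiting parameterization used in the paper.

```python
# Minimal ProMP sketch: Gaussian RBF features, per-demonstration weight fit,
# and a Gaussian over the weights. Names and constants are illustrative.
import numpy as np

def basis(t, n_basis=15, width=0.02):
    """Normalized Gaussian RBF features over phase t in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-0.5 * (t[:, None] - centers[None, :]) ** 2 / width)
    return phi / phi.sum(axis=1, keepdims=True)          # shape (T, n_basis)

def fit_promp(demos, n_basis=15, reg=1e-6):
    """Fit one weight vector per demonstration, then a Gaussian over weights."""
    weights = []
    for y in demos:                                        # y: (T,) trajectory
        t = np.linspace(0, 1, len(y))
        Phi = basis(t, n_basis)
        w = np.linalg.solve(Phi.T @ Phi + reg * np.eye(n_basis), Phi.T @ y)
        weights.append(w)
    W = np.stack(weights)
    return W.mean(axis=0), np.cov(W, rowvar=False)         # mu_w, Sigma_w

def sample_trajectory(mu_w, Sigma_w, T=100, n_basis=15, rng=None):
    """Sample one trajectory from the learned weight distribution."""
    rng = rng or np.random.default_rng()
    w = rng.multivariate_normal(mu_w, Sigma_w)
    return basis(np.linspace(0, 1, T), n_basis) @ w
```

The high dimensionality mentioned in the abstract comes from the size of the weight vector (and its covariance), which grows with the number of basis functions and degrees of freedom; a bimanual setup roughly doubles it, which is what motivates exploiting symmetries between the two arms.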

Cited by 18 publications (6 citation statements) · References 16 publications
“…Introducing data augmentation into RL has primarily aimed at enhancing data efficiency [12]. A natural approach to exploit data augmentation in single-agent RL is to obtain more data via image transformation during model training [17,37,18,1]. Another type of approach introduces data augmentation through an innovative contrastive learning framework called the CURL.…”
Section: Data Augmentation in RL (mentioning, confidence: 99%)
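The image-transformation approach mentioned in the statement above amounts to augmenting sampled observations before each update. The following is a minimal sketch in the spirit of random-shift/crop augmentation for image-based RL; the function name, padding size, and the replay-buffer call in the usage comment are assumptions for illustration, not an API from the cited works.

```python
# Minimal sketch of image data augmentation for RL observations:
# pad each image and re-crop it at a random offset (random shift).
import numpy as np

def random_shift(obs, pad=4, rng=None):
    """Randomly shift a batch of image observations.

    obs: array of shape (B, H, W, C).
    """
    rng = rng or np.random.default_rng()
    b, h, w, _ = obs.shape
    padded = np.pad(obs, ((0, 0), (pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.empty_like(obs)
    for i in range(b):
        dy, dx = rng.integers(0, 2 * pad + 1, size=2)   # random crop offset
        out[i] = padded[i, dy:dy + h, dx:dx + w, :]
    return out

# Usage (hypothetical buffer API): augment a sampled batch before the update.
# batch_obs = replay_buffer.sample(256)
# batch_obs = random_shift(batch_obs)
```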
“…The field of bimanual manipulation has long been studied as a problem involving both hardware design and control [45,21,59,49]. In recent years, researchers have applied learning-based approaches to bimanual manipulation, using imitation learning from demonstrations [62,17,54,60] and reinforcement learning [30,1,8,10,18]. For example, Amadio et al. [1] proposed to leverage probabilistic movement primitives from human demonstrations.…”
Section: Related Work (mentioning, confidence: 99%)
“…In recent years, researchers have applied learning-based approaches to bimanual manipulation, using imitation learning from demonstrations [62,17,54,60] and reinforcement learning [30,1,8,10,18]. For example, Amadio et al. [1] proposed to leverage probabilistic movement primitives from human demonstrations. Chitnis et al. [8] further introduced a high-level planning policy to combine a set of parameterized primitives to solve complex manipulation tasks.…”
Section: Related Work (mentioning, confidence: 99%)
“…Bimanual Robot Manipulation: Bimanual manipulation is a practical problem of great interest [31]. Reinforcement Learning (RL) has been applied to bimanual manipulation tasks [3,7,8,17], but RL methods must deal with the increased burden of exploration due to the presence of two arms. Prior work has tried to address the exploration burden by assuming access to parametrized skills such as reaching and twisting [7], by encouraging efficient exploration via intrinsic motivation [8], and by leveraging movement primitives from human demonstrations [3].…”
Section: Related Work (mentioning, confidence: 99%)