2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8967845
Learning Interactive Behaviors for Musculoskeletal Robots Using Bayesian Interaction Primitives

Abstract: Musculoskeletal robots that are based on pneumatic actuation have a variety of properties, such as compliance and back-drivability, that render them particularly appealing for human-robot collaboration. However, programming interactive and responsive behaviors for such systems is extremely challenging due to the nonlinearity and uncertainty inherent to their control. In this paper, we propose an approach for learning Bayesian Interaction Primitives for musculoskeletal robots given a limited set of example demo…

Cited by 15 publications (10 citation statements)
References 22 publications
“…More recent works model reaching using machine learning. Campbell et al [14] use imitation learning to learn a joint distribution over the actions of the human and the robot. During testing time, the posterior distribution is inferred from the human's initial motion from which the robot's trajectory is sampled.…”
Section: Reaching Phase Of Handshaking
confidence: 99%
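The citation above describes learning a joint distribution over human and robot actions and then, at test time, inferring a posterior from the human's initial motion and sampling the robot's trajectory from it. A minimal sketch of that idea is Gaussian conditioning over stacked trajectory weights; all names, dimensions, and numbers below are toy assumptions, not the cited implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint Gaussian over human (h) and robot (r) trajectory weights,
# as if learned from demonstrations. Layout: [w_h1, w_h2, w_r1, w_r2].
mu = np.array([0.0, 1.0, 2.0, 3.0])
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 1e-3 * np.eye(4)          # a valid (positive-definite) covariance

# Partition into human (observed) and robot (predicted) blocks.
hh, hr = Sigma[:2, :2], Sigma[:2, 2:]
rh, rr = Sigma[2:, :2], Sigma[2:, 2:]

w_h_obs = np.array([0.2, 0.8])              # weights inferred from the human's initial motion

# Gaussian conditioning: posterior p(w_r | w_h_obs).
K = rh @ np.linalg.inv(hh)
mu_r = mu[2:] + K @ (w_h_obs - mu[:2])
Sigma_r = rr - K @ hr

# Sample a robot trajectory (weight vector) from the posterior.
w_r = rng.multivariate_normal(mu_r, Sigma_r)
```

The posterior covariance `Sigma_r` is the Schur complement of the human block, so the robot's predicted motion tightens as more of the human's motion is observed.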
“…Major Findings Modelling of reaching behaviours draws heavily on learning from human interactions, unlike other robotic grasping/manipulation tasks, where a lot of it can be learnt from scratch. This provides a strong prior to help make the motions more human-like and can also be used to initialise [14] or guide [17] the learning.…”
Section: Reaching Phase Of Handshaking
confidence: 99%
“…Vinayavekhin et al [9] aim to bridge this gap in the context of human-robot handshaking using a recurrent network for predicting the human-hand motion and devise a simple controller for the robot's response motion. Such interaction dynamics during human-robot handshaking are also captured implicitly by Campbell et al [10] who learn a joint distribution over the trajectories of the human and the robot. However, their approach is robot-specific and would need to be re-trained with human interaction partners when applied to new robots.…”
Section: Introduction
confidence: 99%
“…Our estimate of the underlying latent model contains correlated uncertainties between the individual weights, due to a shared error in the phase estimate. Intuitively, this is because a temporal error induces a correlated error in spatial terms due to the phase dependency of the basis functions [4]. Probabilistically, we represent this with the augmented state vector s = [φ, φ̇, w] and the following definition:…”
Section: Methods
confidence: 99%
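The phase dependency described above can be illustrated with a toy sketch: because the basis functions are evaluated at the estimated phase, a small phase error shifts every weight's contribution together, producing a correlated spatial error. The basis shape, centers, and weights below are illustrative assumptions, not the cited model.

```python
import numpy as np

def rbf_basis(phase, centers, width=0.1):
    """Gaussian radial basis functions evaluated at a scalar phase in [0, 1]."""
    return np.exp(-0.5 * ((phase - centers) / width) ** 2)

centers = np.linspace(0.0, 1.0, 8)   # basis-function centers along the phase axis
w = np.sin(2 * np.pi * centers)      # toy weight vector

phi_true, phi_err = 0.50, 0.55       # true phase vs. erroneous phase estimate
y_true = rbf_basis(phi_true, centers) @ w
y_est = rbf_basis(phi_err, centers) @ w   # all basis values shift together,
                                          # so the spatial error is correlated

# Augmenting the state with phase and phase velocity lets a filter attribute
# part of the spatial error to the phase estimate: s = [phase, phase_rate, w].
s = np.concatenate(([phi_true], [0.01], w))
```

A filter over the plain weight vector would have to absorb the temporal error as independent spatial noise; the augmented state makes the shared cause explicit.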
“…We model the interaction as a Bayesian Interaction Primitive (BIP) [2], [3], [4], a spatiotemporal LfD framework. This model is capable of predicting both an appropriate robotic response (consisting of joint trajectories) as well as contact forces that should be exerted by the robot, given observations of the partner's pose and the force currently exerted by the partner onto the robot.…”
Section: Introduction
confidence: 99%
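The snippet above describes conditioning a BIP on observations of the partner's pose and exerted force to predict both joint trajectories and contact forces. A minimal sketch of this multimodal filtering step, using a standard Kalman measurement update over stacked per-channel weights, follows; the channel layout, priors, and measurements are toy assumptions, not the cited system.

```python
import numpy as np

def rbf_basis(phase, centers, width=0.1):
    return np.exp(-0.5 * ((phase - centers) / width) ** 2)

n_basis = 5
centers = np.linspace(0.0, 1.0, n_basis)
channels = ["human_pose", "human_force", "robot_joints", "robot_force"]
observed = [True, True, False, False]        # only the partner's channels are measured

# Prior over stacked weights (one block per channel). An identity prior has no
# human-robot coupling; in practice the cross-covariance comes from demonstrations.
mu = np.zeros(len(channels) * n_basis)
P = np.eye(len(mu))

phi = 0.3                                    # current phase estimate
basis = rbf_basis(phi, centers)

# Observation matrix: selects the observed channels at the current phase.
obs_idx = [i for i, o in enumerate(observed) if o]
H = np.zeros((len(obs_idx), len(mu)))
for row, ch in enumerate(obs_idx):
    H[row, ch * n_basis:(ch + 1) * n_basis] = basis

z = np.array([0.4, 1.2])                     # measured partner pose and force
R = 0.05 * np.eye(len(z))                    # measurement noise

# Standard Kalman measurement update on the weight vector.
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
mu = mu + K @ (z - H @ mu)
P = (np.eye(len(mu)) - K @ H) @ P

# Predicted robot response at this phase (joint position and contact force).
robot_joints = basis @ mu[2 * n_basis:3 * n_basis]
robot_force = basis @ mu[3 * n_basis:4 * n_basis]
```

With a demonstration-learned covariance coupling the human and robot blocks, the same update would propagate the partner's observed pose and force into nonzero predictions for the robot's joints and contact forces.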