2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids)
DOI: 10.1109/humanoids.2016.7803340
Dynamic movement primitives in latent space of time-dependent variational autoencoders

Cited by 52 publications (67 citation statements)
References 10 publications
“…While AEs and VAEs can be used to learn the whole-body human posture, they cannot represent whole-body trajectories over time in a smooth and coherent manner (i.e., without jolts), since there is no postural time-dependence. This issue was well explained by Chen et al. in [8], who proposed to enforce the temporal dependency by learning DMPs in the latent space. Their method, called VAE-DMP, uses Deep Variational Bayes Filters (DVBF) [24], where Bayesian filtering is applied to latent variables with temporal dependencies, using a recurrent deep neural network composed of chained VAEs. An improvement of VAE-DMP, called the Variational Time Series Feature Extractor (VTSFE), was proposed by Chaveroche et al. in [5] to encode features of the time series for both classification and prediction purposes.…”
Section: B. Dimensionality Reduction in a Latent Space
confidence: 94%
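The excerpt above describes enforcing temporal dependency by learning DMPs in a VAE's latent space. As a rough illustration of the DMP half of that idea only (the encoder/decoder are omitted, and the gain names `alpha`, `beta`, `tau` are conventional choices from the DMP literature, not the authors' implementation), here is a minimal one-dimensional discrete DMP rollout:

```python
import numpy as np

def dmp_rollout(y0, g, f, tau=1.0, alpha=25.0, beta=6.25, dt=0.01, T=500):
    """Integrate a discrete DMP: tau*dz = alpha*(beta*(g - y) - z) + f(s)."""
    y, z = y0, 0.0
    s, a_s = 1.0, 4.0            # phase variable, canonical system tau*ds = -a_s*s
    traj = [y]
    for _ in range(T):
        dz = (alpha * (beta * (g - y) - z) + f(s)) / tau
        dy = z / tau
        z += dz * dt
        y += dy * dt
        s += (-a_s * s / tau) * dt
        traj.append(y)
    return np.array(traj)

# With a zero forcing term, the critically damped spring pulls the state
# smoothly toward the goal g -- the "no jolts" property the excerpt refers to.
traj = dmp_rollout(y0=0.0, g=1.0, f=lambda s: 0.0)
```

In VAE-DMP the variable `y` would be a latent coordinate produced by the encoder, and the forcing term `f` would be learned from demonstrations rather than set to zero.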
“…AE-ProMPs is computationally efficient and suitable for our application. We compare it with similar methods proposed to encode whole-body movements in a latent space, namely VAE-DMP [8] and VTSFE [5]. The first exploits variational autoencoders (VAEs) to compress the movement into a reduced latent space, then enforces the continuity of the latent-space trajectories using Dynamic Movement Primitives (DMPs).…”
Section: Introduction
confidence: 99%
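AE-ProMPs, mentioned in the excerpt above, pairs an autoencoder with Probabilistic Movement Primitives (ProMPs). A hedged sketch of the ProMP side alone (the basis count, RBF width, and ridge term are illustrative assumptions): fit each demonstration with radial-basis-function weights, then model a Gaussian distribution over those weights:

```python
import numpy as np

def rbf_basis(t, n_basis=10, width=0.02):
    """Normalized RBF features over phase t in [0, 1]; shape (len(t), n_basis)."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def fit_promp(demos, n_basis=10, ridge=1e-6):
    """Fit per-demo weights via ridge regression; return weight mean, covariance, basis."""
    t = np.linspace(0, 1, demos.shape[1])
    Phi = rbf_basis(t, n_basis)
    A = Phi.T @ Phi + ridge * np.eye(n_basis)
    W = np.linalg.solve(A, Phi.T @ demos.T).T   # one weight row per demonstration
    return W.mean(axis=0), np.cov(W.T), Phi

# Toy demonstrations: noisy sine trajectories standing in for latent-space motions
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
demos = np.sin(2 * np.pi * t)[None, :] + 0.05 * rng.standard_normal((20, 50))
w_mean, w_cov, Phi = fit_promp(demos)
mean_traj = Phi @ w_mean   # expected trajectory under the ProMP
```

In AE-ProMPs the demonstrations would be latent trajectories produced by the autoencoder's encoder; the weight covariance `w_cov` is what makes the primitive probabilistic, allowing sampling and conditioning.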
“…An early attempt aimed to link neural networks to path planning by specifying obstacles in topologically ordered neural maps and using the neural activity gradient to trace the shortest path, with the neural activity evolving toward a state corresponding to a minimum of a Lyapunov function [11]. More recently, a method was developed that enables the representation of high-dimensional humanoid movements in the low-dimensional latent space of a time-dependent variational autoencoder framework [12]. Reinforcement Learning (RL) approaches have also been proposed for motion planning applications [13], [14].…”
Section: Related Work
confidence: 99%
“…valid regions in the original continuous space; the states having low posterior probabilities (or, equivalently, high costs in the planning problem) up to the current time-step tend not to be re-sampled (lines 12-13). After the forward recursion, the algorithm picks the most likely final state y*_K (lines 15-16), and then constructs the whole trajectory by backtracking through its ancestry (lines 17-20).…”
Section: Dynamic Programming for Computing MAP Trajectory Using PA
confidence: 99%
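The excerpt above describes a forward recursion that keeps high-posterior states, selects the most likely final state y*_K, and recovers the trajectory by walking back through each state's ancestry. A generic sketch of that forward-plus-backtracking pattern over a discretized state set (Viterbi-style; the transition and observation scores below are illustrative placeholders, not the cited paper's model):

```python
import numpy as np

def map_trajectory(log_init, log_trans, log_obs):
    """Forward recursion + backtracking for the MAP state sequence.

    log_init:  (S,)   initial log-probabilities
    log_trans: (S, S) log transition scores, log_trans[i, j] = score of i -> j
    log_obs:   (K, S) per-step log observation scores
    """
    K, S = log_obs.shape
    score = log_init + log_obs[0]
    parent = np.zeros((K, S), dtype=int)        # ancestry used for backtracking
    for k in range(1, K):
        cand = score[:, None] + log_trans       # (S, S): score via each predecessor
        parent[k] = cand.argmax(axis=0)         # best ancestor per state
        score = cand.max(axis=0) + log_obs[k]
    # Pick the most likely final state, then walk the ancestry backwards.
    path = [int(score.argmax())]
    for k in range(K - 1, 0, -1):
        path.append(int(parent[k][path[-1]]))
    return path[::-1]

# Tiny example: 2 states; observations favour state 0 at step 0, state 1 after.
log_init = np.log(np.array([0.9, 0.1]))
log_trans = np.log(np.full((2, 2), 0.5))
log_obs = np.log(np.array([[0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]))
path = map_trajectory(log_init, log_trans, log_obs)   # -> [0, 1, 1]
```

The particle-based scheme in the excerpt differs in that low-posterior states are dropped by re-sampling rather than exhaustively scored, but the pick-the-best-final-state-then-backtrack structure is the same.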