2015 International Conference on Affective Computing and Intelligent Interaction (ACII)
DOI: 10.1109/acii.2015.7344608
Affect-expressive movement generation with factored conditional Restricted Boltzmann Machines

Cited by 17 publications (25 citation statements)
References 19 publications
“…The final set of methods seeks to learn both the relevant features and the mapping from a (large) corpus of labeled exemplar data. Several different techniques have been proposed, including dimensionality reduction techniques, such as principal component analysis (PCA) [95] or functional PCA [81], and spatio-temporal models such as factored conditional restricted Boltzmann machines (FCRBMs) [3], the Bayesian dynamic network (BDN) [53], factored Gaussian process dynamical models (GPDM) [100], and structured recurrent neural networks (S-RNN) [39]. Although the preceding works aim to automatically extract both the features and the mapping for generating expressive trajectories, and the learning approaches rely on large corpora of human motion data, the majority of works remain focused on a single task (e.g., walking) and a specific robot structure, most commonly humanoid.…”
Section: Learned Features
confidence: 99%
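The excerpt above lists PCA among the dimensionality-reduction techniques used to learn movement features from a motion corpus. As a minimal illustrative sketch (not code from any of the cited works, and using synthetic stand-in data rather than a real mocap corpus), PCA over frames of joint-angle channels looks like this:

```python
import numpy as np

# Synthetic "motion" data: 200 frames x 30 joint-angle channels,
# built to have rank 3 so three components explain it fully.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 30))

# Center the data and extract principal components via SVD.
mean = frames.mean(axis=0)
centered = frames - mean
_, s, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:3]              # learned "movement features"
codes = centered @ components.T  # low-dimensional trajectory, 200 x 3

# Reconstruct: for rank-3 data, three components recover it exactly
# (up to floating-point error).
recon = codes @ components + mean
print(np.allclose(recon, frames, atol=1e-8))
```

The low-dimensional `codes` would play the role of the learned feature trajectory that a generative mapping is then trained on; real pipelines would of course fit the decomposition on a large labeled corpus rather than random data.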
“…The algorithm produces gaits and movements of the system, and a user can "encourage an added level of humor and expressivity" by selecting some preferred gaits to pass on to future evolving generations. More recent examples include spatio-temporal models such as factored conditional restricted Boltzmann machines (FCRBMs) [3], the BDN [53], GPDMs [100], and S-RNNs [39], which simultaneously learn both the features and the mapping.…”
Section: Formulated
confidence: 99%
“…Importantly, they explicitly address the search for robust computational models that are able to harness the strengths of these systems, most importantly their speed and energy efficiency. The proposed architecture scales naturally to substrates with more neuronal real-estate and can be used for a wide array of tasks that can be mapped to a Bayesian formulation, such as constraint satisfaction problems (Jonke et al, 2016; Fonseca Guerra and Furber, 2017), prediction of temporal sequences (Sutskever and Hinton, 2007), movement planning (Taylor and Hinton, 2009; Alemi et al, 2015), simulation of solid-state systems (Edwards and Anderson, 1975), and quantum many-body problems (Carleo and Troyer, 2017; Czischek et al, 2018).…”
Section: Discussion
confidence: 99%
“…Transitions between motion styles can be introduced at will by appropriate settings of the motion style labels over the sequence. Alemi et al [ALP15] applied the FCRBM model to the generation of controlled affective variations of walking motion, based on time-varying valence and arousal labels. Hierarchical FCRBMs.…”
Section: Figure 7: In a Variational Autoencoder (VAE) A Motion Sequen...
confidence: 99%
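The excerpt above describes conditioning walking generation on time-varying valence and arousal labels via an FCRBM. The core mechanism is a three-way factored interaction: the label vector multiplicatively gates the visible-to-hidden connection through a shared factor layer, instead of a full three-way weight tensor. The sketch below (hypothetical dimensions, untrained random weights, and no training loop) shows only that gating computation, not the full model from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid, n_lab, n_fac = 30, 16, 2, 8  # label = (valence, arousal)

W_vf = rng.normal(scale=0.1, size=(n_vis, n_fac))  # visible -> factors
W_hf = rng.normal(scale=0.1, size=(n_hid, n_fac))  # hidden  -> factors
W_lf = rng.normal(scale=0.1, size=(n_lab, n_fac))  # label   -> factors
b_h = np.zeros(n_hid)

def hidden_probs(v, label):
    """P(h=1 | v, label): the label gates each factor multiplicatively."""
    gated = (v @ W_vf) * (label @ W_lf)  # elementwise three-way gating
    return 1.0 / (1.0 + np.exp(-(gated @ W_hf.T + b_h)))

v = rng.normal(size=n_vis)  # one frame of pose features
p_neutral = hidden_probs(v, np.array([0.0, 0.0]))
p_aroused = hidden_probs(v, np.array([0.0, 1.0]))
```

With a zero label the gating nulls the factored term, so the hidden units fall back to their biases; a nonzero arousal label changes the hidden activation pattern for the same pose, which is what lets a time-varying label trajectory steer the style of the generated motion.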