2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)
DOI: 10.1109/devlrn.2013.6652563

Learning semantic components from subsymbolic multimodal perception

Abstract: Perceptual systems often include sensors from several modalities. However, existing robots do not yet sufficiently discover patterns that are spread over the flow of multimodal data they receive. In this paper we present a framework that learns a dictionary of words from full spoken utterances, together with a set of gestures from human demonstrations and the semantic connection between words and gestures. We explain how to use a nonnegative matrix factorization algorithm to learn a dictionary of components…
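As a rough illustration of the idea described in the abstract (not the authors' implementation), the Python sketch below stacks speech and gesture feature matrices and factorizes them with nonnegative matrix factorization, so that a single set of components spans both modalities; the speech block of the learned dictionary can then be used to infer activations and read out the gesture block. The synthetic data, the dimensions, and the use of scikit-learn's NMF and SciPy's NNLS solver are all assumptions made for this sketch.

# Minimal sketch, assuming scikit-learn's NMF and SciPy's nnls;
# data and sizes are synthetic placeholders, not the paper's setup.
import numpy as np
from sklearn.decomposition import NMF
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_samples, d_speech, d_gesture, k = 200, 50, 30, 10

# Synthetic nonnegative data: both modalities derive from shared latent causes.
latent = rng.random((n_samples, k))
V_speech = latent @ rng.random((k, d_speech))
V_gesture = latent @ rng.random((k, d_gesture))
V = np.hstack([V_speech, V_gesture])   # one row per demonstration

# Learn one nonnegative dictionary spanning both modalities.
model = NMF(n_components=k, init="nndsvd", max_iter=500)
W = model.fit_transform(V)             # activations, one row per sample
H = model.components_                  # dictionary: k components x all features
H_speech, H_gesture = H[:, :d_speech], H[:, d_speech:]

# Cross-modal use: infer activations from speech alone via nonnegative
# least squares, then reconstruct the gesture part from the dictionary.
h, _ = nnls(H_speech.T, V_speech[0])
gesture_estimate = h @ H_gesture
print(np.corrcoef(gesture_estimate, V_gesture[0])[0, 1])

On data that genuinely share latent structure, the speech-only reconstruction correlates strongly with the held-out gesture features; this is the kind of cross-modal retrieval the citing works below describe.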

Cited by 19 publications (16 citation statements)
References 17 publications
“…Many researchers have investigated methods for robots to learn relationships among multimodal sensory data [12]-[16]. The purpose of these studies was to predict one modality from another by using models.…”
Section: Related Work (citation type: mentioning)
Confidence: 99%
“…In [50], the authors use non-negative matrix factorization to learn a joint representation of gestures and spoken words. They show that the learned representations can be used to retrieve one modality given the other (e.g.…”
Section: Multimodal Fusion (citation type: mentioning)
Confidence: 99%
“…Many researchers have investigated methods for robots to learn relationships among multimodal sensory data [10], [11], [12], [13], [14]. The purpose of these studies was to predict one modality from another by using models.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%