2016
DOI: 10.1109/tcds.2016.2565542

Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences

Abstract: In this paper, we propose a novel unsupervised learning method for the lexical acquisition of words related to places visited by robots from human continuous speech signals. We address the problem of learning novel words by a robot that has no prior knowledge of these words except for a primitive acoustic model. Further, we propose a method that allows a robot to effectively use the learned words and their meanings for self-localization tasks. The proposed method is nonparametric Bayesian spatial con…
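
To make the abstract's idea concrete, the sketch below associates self-localization estimates with place words. It is a deliberately simplified illustration, not the paper's SpCoA model: the nonparametric Bayesian formulation, latticelm word segmentation, and particle-filter self-localization are replaced by a single Gaussian per already-segmented word, and every name here (learn_spatial_concepts, word_likelihoods, position_word_pairs) is hypothetical.

```python
# Minimal illustrative sketch, NOT the authors' SpCoA implementation:
# one Gaussian per word, fit to the positions at which that word was heard.
import numpy as np
from collections import defaultdict


def learn_spatial_concepts(position_word_pairs):
    """Fit one 2-D Gaussian per word from (position, word) training pairs."""
    positions_by_word = defaultdict(list)
    for pos, word in position_word_pairs:
        positions_by_word[word].append(np.asarray(pos, dtype=float))

    concepts = {}
    for word, positions in positions_by_word.items():
        pts = np.stack(positions)
        concepts[word] = {
            "mean": pts.mean(axis=0),                              # place centre
            "cov": np.cov(pts, rowvar=False) + 1e-6 * np.eye(2),   # regularized spread
        }
    return concepts


def word_likelihoods(position, concepts):
    """Gaussian density p(position | word) for each learned word."""
    x = np.asarray(position, dtype=float)
    scores = {}
    for word, g in concepts.items():
        diff = x - g["mean"]
        norm = np.sqrt(np.linalg.det(2.0 * np.pi * g["cov"]))
        scores[word] = float(np.exp(-0.5 * diff @ np.linalg.inv(g["cov"]) @ diff) / norm)
    return scores


if __name__ == "__main__":
    # Toy data: positions (in metres) paired with the place word heard there.
    data = [((0.1, 0.2), "kitchen"), ((0.0, 0.3), "kitchen"),
            ((5.0, 4.8), "office"), ((5.2, 5.1), "office")]
    concepts = learn_spatial_concepts(data)
    print(word_likelihoods((0.05, 0.25), concepts))
```

In spirit, word-conditioned position likelihoods like these are what would let learned place words feed back into self-localization, as the abstract describes.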


Cited by 31 publications (32 citation statements)
References 35 publications
“…Theoretical and empirical validations should be applied for further applications. So far, many researchers, including the authors, have proposed numerous cognitive models for robots: object concept formation based on appearance, usage, and function [41], formation of integrated concepts of objects and motions [42], grammar learning [16], language understanding [43], spatial concept formation and lexical acquisition [8,20,44], simultaneous phoneme and word discovery [45-47], and cross-situational learning [48,49]. These models can be regarded as integrative models constructed by combining small-scale models.…”
Section: Results (mentioning)
confidence: 99%
“…A further advancement of such cognitive systems allows robots to find the meanings of words by treating linguistic input as another modality [13][14][15]. Cognitive models have recently become more complex, realizing various cognitive capabilities: grammar acquisition [16], language model learning [17], hierarchical concept acquisition [18,19], spatial concept acquisition [20], motion skill acquisition [21], and task planning [7] (see Fig. 1).…”
Section: Introduction (mentioning)
confidence: 99%
“…• The extension to a mutual segmentation model of sound strings and situations based on multimodal information will be achieved using a multimodal LDA with a nested Pitman-Yor language model (Nakamura et al., 2014) and a spatial concept acquisition model that integrates self-localization and unsupervised word discovery from spoken sentences (Taniguchi et al., 2016a).…”
Section: Discussion (mentioning)
confidence: 99%
“…Taguchi et al (2011) proposed an unsupervised method for simultaneously categorizing self-positions and phoneme sequences from user speech without any prior language model. Taniguchi et al (2016Taniguchi et al ( , 2018a proposed the nonparametric Bayesian Spatial Concept Acquisition method (SpCoA) using an unsupervised word segmentation method, latticelm (Neubig et al 2012), and SpCoA++ for highly accurate lexical acquisition as a result of updating the language model. Gu et al (2016) proposed a method to learn relative spatial concepts, i.e., the words related to distance and direction, from the positional relationship between an utterer and objects.…”
Section: Introductionmentioning
confidence: 99%