2014 IEEE/RSJ International Conference on Intelligent Robots and Systems
DOI: 10.1109/iros.2014.6942621
Mutual learning of an object concept and language model based on MLDA and NPYLM

Cited by 35 publications (28 citation statements). References 10 publications.
“…• The extension to a mutual segmentation model of sound strings and situations based on multimodal information will be achieved by building on a multimodal LDA with a nested Pitman-Yor language model (Nakamura et al., 2014) and a spatial concept acquisition model that integrates self-localization and unsupervised word discovery from spoken sentences (Taniguchi et al., 2016a).…”
Section: Discussion
“…As shown in Figure 2, the model that generates observations through categories on each sensor from an integrated concept in an agent can be extended to a model that generates observations through categories on each agent from a word in a multi-agent system. Figure 2 (a) represents a graphical model of probabilistic generative multimodal categorization, e.g., Nakamura et al. (2014). It can integrate multimodal information, e.g., haptic and visual information, and form categories.…”
Section: Expansion Of a Multimodal Categorizer From Personal To Inter…
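The multimodal categorization the excerpt describes can be illustrated with a toy collapsed Gibbs sampler for a multimodal LDA: topics (object categories) are shared across modalities, while each modality keeps its own topic-word statistics. This is a minimal sketch, not the paper's implementation; the function name, toy data, and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def mlda_gibbs(docs, vocab_sizes, K=3, alpha=0.1, beta=0.1, iters=100, seed=0):
    """Toy collapsed Gibbs sampler for a multimodal LDA-style categorizer.

    docs: list of objects; each object is a dict {modality index: list of word ids}
          (e.g., modality 0 = visual words, modality 1 = haptic words).
    vocab_sizes: vocabulary size per modality.
    Topics are shared across modalities; each modality has its own
    topic-word count table, so heterogeneous sensor channels are fused
    through the common per-object topic distribution.
    """
    rng = np.random.default_rng(seed)
    M, D = len(vocab_sizes), len(docs)
    n_dk = np.zeros((D, K))                          # object-topic counts
    n_kw = [np.zeros((K, V)) for V in vocab_sizes]   # per-modality topic-word counts
    n_k = [np.zeros(K) for _ in range(M)]            # per-modality topic totals
    z = []                                           # topic assignment per token
    for d, doc in enumerate(docs):                   # random initialization
        zd = {}
        for m, words in doc.items():
            zm = rng.integers(K, size=len(words))
            for w, k in zip(words, zm):
                n_dk[d, k] += 1; n_kw[m][k, w] += 1; n_k[m][k] += 1
            zd[m] = zm
        z.append(zd)
    for _ in range(iters):                           # collapsed Gibbs sweeps
        for d, doc in enumerate(docs):
            for m, words in doc.items():
                for i, w in enumerate(words):
                    k = z[d][m][i]                   # remove current assignment
                    n_dk[d, k] -= 1; n_kw[m][k, w] -= 1; n_k[m][k] -= 1
                    p = (n_dk[d] + alpha) * (n_kw[m][:, w] + beta) \
                        / (n_k[m] + beta * vocab_sizes[m])
                    k = rng.choice(K, p=p / p.sum()) # resample topic
                    z[d][m][i] = k
                    n_dk[d, k] += 1; n_kw[m][k, w] += 1; n_k[m][k] += 1
    return n_dk  # per-object topic counts (unnormalized category posterior)
```

With two toy objects whose visual and haptic word ids are disjoint, the returned per-object topic counts summarize how each object's multimodal tokens were grouped into shared categories.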
“…As a method that further improves the performance of the lexical acquisition, a mutual learning method was proposed by Nakamura et al on the basis of the integration of the learning of object concepts with a language model [35], [36]. Following a similar approach, Heymann et al proposed a method that alternately and repeatedly updates phoneme recognition results and the language model by using unsupervised word segmentation [37].…”
Section: Discussion
“…One interesting extension of this model concerns the selfacquisition of language combining automatic speech recognition with the MLDA system. Here, unsupervised morphological analysis [85] is performed on phoneme recognition results in order to acquire a vocabulary. The point of this model is that multimodal categories are used for learning lexical information and vice versa.…”
Section: Unsupervised Learning Viewpoint: From Multimodal Categori…
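The segmentation step the excerpt refers to can be sketched as Viterbi word segmentation of a phoneme string under a unigram word model. In NPYLM the word probabilities come from the nested Pitman-Yor language model learned jointly with the segmentation; here `word_logprob` is a stand-in scorer, and the function name, `max_len` limit, and example lexicon are illustrative assumptions.

```python
import math

def segment(phonemes, word_logprob, max_len=5):
    """Viterbi word segmentation of a phoneme string under a unigram word model.

    word_logprob(w) returns log P(w) for a candidate substring w; in NPYLM this
    score would come from the learned language model, here it can be any scorer.
    """
    n = len(phonemes)
    best = [-math.inf] * (n + 1)   # best[t]: log-prob of best segmentation of prefix
    best[0] = 0.0
    back = [0] * (n + 1)           # back[t]: start index of last word ending at t
    for t in range(1, n + 1):
        for l in range(1, min(max_len, t) + 1):
            w = phonemes[t - l:t]
            score = best[t - l] + word_logprob(w)
            if score > best[t]:
                best[t], back[t] = score, t - l
    words, t = [], n               # backtrace the best segmentation
    while t > 0:
        words.append(phonemes[back[t]:t])
        t = back[t]
    return words[::-1]
```

For example, with a scorer that favors two known words and penalizes unknown substrings by length, `segment("akaringo", score)` recovers `["aka", "ringo"]`; in the mutual-learning setting, the words discovered this way would in turn refine the multimodal categories.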