2015
DOI: 10.1527/tjsai.30.498
Mutual Learning of an Object Concept and Language Model Based on MLDA and NPYLM

Abstract: Humans develop their concept of an object by classifying it into a category, and acquire language by interacting with others at the same time. Thus, the meaning of a word can be learnt by connecting the recognized word and concept. We consider such an ability to be important in allowing robots to flexibly develop their knowledge of language and concepts. Accordingly, we propose a method that enables robots to acquire such knowledge. The object concept is formed by classifying multimodal information acqu…
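The mutual-learning idea in the abstract — word segmentation and object categorization improving each other — can be illustrated with a toy loop. This is a hedged sketch only: the paper's actual method uses MLDA for multimodal categorization and NPYLM for segmentation, whereas here both are replaced by simple substring counting, and every helper name (`candidate_words`, `segment`, `mutual_learning`) is hypothetical.

```python
from collections import Counter

def candidate_words(utterances, max_len=4):
    """Frequent substrings (length 2..max_len) serve as an initial lexicon."""
    counts = Counter()
    for u in utterances:
        for i in range(len(u)):
            for j in range(i + 2, min(i + max_len, len(u)) + 1):
                counts[u[i:j]] += 1
    return {w for w, n in counts.items() if n >= 2}

def segment(utterance, vocab):
    """Greedy longest-match segmentation; unknown spans fall back to chars."""
    words, i = [], 0
    while i < len(utterance):
        for j in range(len(utterance), i, -1):
            if utterance[i:j] in vocab or j == i + 1:
                words.append(utterance[i:j])
                i = j
                break
    return words

def mutual_learning(data, iterations=3, purity=0.8):
    """Alternate (a) segmenting utterances and (b) associating the resulting
    words with object categories; only words that attach consistently to a
    single category survive into the next round's vocabulary."""
    vocab = candidate_words([u for u, _ in data])
    for _ in range(iterations):
        assoc = Counter()
        for utterance, category in data:
            for w in segment(utterance, vocab):
                if len(w) > 1:
                    assoc[(w, category)] += 1
        totals = Counter()
        for (w, _), n in assoc.items():
            totals[w] += n
        vocab = {w for (w, c), n in assoc.items() if n / totals[w] >= purity}
    return vocab
```

On invented data such as `[("redball", "ball"), ("blueball", "ball"), ("redcup", "cup"), ("bluecup", "cup")]`, the loop retains "ball" and "cup" (each tied to one category) and discards "red" and "blue" (split across categories), mimicking how grounding words in concepts can disambiguate segmentation.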

Cited by 21 publications (35 citation statements)
References 16 publications
“…NPYLM has already been applied and extended to speech recognition (Neubig et al., 2010), statistical machine translation (Nguyen et al., 2010), and even robotics (Nakamura et al., 2014). For all these research areas, we believe PYHSMM would be beneficial for their extension.…”
Section: Discussion
confidence: 97%
“…Recently, various computational models and machine learning methods for multimodal object categorization have been proposed in artificial intelligence, cognitive robotics, and related research fields (Roy and Pentland, 2002; Natale et al., 2004; Nakamura et al., 2007, 2009, 2011a, b, 2014; Iwahashi et al., 2010; Sinapov and Stoytchev, 2011; Araki et al., 2012; Griffith et al., 2012; Ando et al., 2013; Celikkanat et al., 2014; Sinapov et al., 2014). For example, Sinapov and Stoytchev (2011) proposed a graph-based multimodal categorization method that allows a robot to recognize a new object by its similarity to a set of familiar objects.…”
Section: Background and Related Work
confidence: 99%
“…First, in a rigorous manner, we formulate the online joint learning of concepts and a language model, based on a generative model. In contrast to this paper, [6] and [7] provide no theoretical formulation of the joint learning problem, and [9] does not involve an online algorithm. Hence, this is the first attempt to propose an online joint learning algorithm that achieves the aforementioned learning objectives, as shown in Fig.…”
Section: Introduction
confidence: 99%
“…Using this relation, the accuracy of both phoneme recognition and object classification can be improved. The idea of joint learning was first proposed in [9] by the co-authors of this paper. However, the algorithm proposed in [9] is a batch-type algorithm, subject to the problems described above.…”
Section: Introduction
confidence: 99%