2011
DOI: 10.1163/016918611x595035
Grounding of Word Meanings in Latent Dirichlet Allocation-Based Multimodal Concepts

Cited by 35 publications (23 citation statements)
References 9 publications
“…Their method allowed the robot to acquire phonemes and words from visual and auditory information through interaction with the human. Nakamura et al (2011a, b) proposed multimodal latent Dirichlet allocation (MLDA) and a multimodal hierarchical Dirichlet process (MHDP), which enable the categorization of objects from multimodal information, i.e., visual, auditory, haptic, and word information. Their methods enabled more accurate object categorization by using multimodal information.…”
Section: Related Work
confidence: 99%
“…Attamimi et al (2016) proposed multilayered MLDA (mMLDA), which hierarchically integrates multiple MLDAs as an extension of Nakamura et al (2011a). They estimated the relationships among words and multiple concepts by weighting the learned words according to their mutual information as a post-processing step.…”
Section: Related Work
confidence: 99%
“…Qu & Chai's method based on the IBM translation model [12] estimates the word-entity association probability. Nakamura et al proposed a method to learn object concepts and word meanings from multimodal information and verbal information [10]. The method proposed in [10] is a categorization method based on multimodal latent Dirichlet allocation (MLDA) that enables the acquisition of object concepts from multimodal information, such as visual, auditory, and haptic information [13].…”
Section: A Lexical Acquisition
confidence: 99%
“…Nakamura et al proposed a method to learn object concepts and word meanings from multimodal information and verbal information [10]. The method proposed in [10] is a categorization method based on multimodal latent Dirichlet allocation (MLDA) that enables the acquisition of object concepts from multimodal information, such as visual, auditory, and haptic information [13]. Araki et al addressed the development of a method combining unsupervised word segmentation from uttered sentences by a nested Pitman-Yor language model (NPYLM) [14] and the learning of object concepts by MLDA [11].…”
Section: A Lexical Acquisition
confidence: 99%
“…Yu and Ballard (2004) explored a framework that learns the association between objects and their spoken names in day-to-day tasks. Nakamura et al (2011) introduced a multimodal categorization method applied to robotics. Their framework exploited the relation of concepts across different modalities (visual, audio, and haptic) using multimodal latent Dirichlet allocation.…”
Section: Introduction
confidence: 99%
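The MLDA model that recurs throughout these citation statements can be understood as an LDA-style topic model in which each object owns feature tokens in several modalities (e.g., visual, auditory, haptic, word), and all modalities share one per-object distribution over latent categories. The collapsed Gibbs sampler below is a minimal illustrative sketch of that idea, not Nakamura et al.'s actual implementation; the function name, data layout, and hyperparameters are all assumptions made for the example.

```python
import numpy as np

def mlda_gibbs(docs, K, V, iters=50, alpha=0.1, beta=0.1, seed=0):
    """Collapsed Gibbs sampler for a toy multimodal LDA.

    docs : list of objects; each object is a dict mapping a modality name
           to a list of integer feature ids (< V[modality]).
    K    : number of shared latent categories (topics).
    V    : dict mapping modality name to its vocabulary size.
    Returns per-object category proportions theta, shape (len(docs), K).
    """
    rng = np.random.default_rng(seed)
    mods = list(V.keys())
    n_dk = np.zeros((len(docs), K))                 # object-category counts
    n_kv = {m: np.zeros((K, V[m])) for m in mods}   # category-feature counts
    n_k = {m: np.zeros(K) for m in mods}            # category totals per modality
    z = []                                          # assignment per token

    # Random initialization of category assignments.
    for d, doc in enumerate(docs):
        zd = {}
        for m in mods:
            toks = doc.get(m, [])
            zm = rng.integers(K, size=len(toks))
            zd[m] = zm
            for w, k in zip(toks, zm):
                n_dk[d, k] += 1; n_kv[m][k, w] += 1; n_k[m][k] += 1
        z.append(zd)

    # Gibbs sweeps: resample each token's category, with the object-level
    # counts n_dk shared across all modalities (the "multimodal" coupling).
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for m in mods:
                for i, w in enumerate(doc.get(m, [])):
                    k = z[d][m][i]
                    n_dk[d, k] -= 1; n_kv[m][k, w] -= 1; n_k[m][k] -= 1
                    p = (n_dk[d] + alpha) * (n_kv[m][:, w] + beta) \
                        / (n_k[m] + beta * V[m])
                    k = rng.choice(K, p=p / p.sum())
                    z[d][m][i] = k
                    n_dk[d, k] += 1; n_kv[m][k, w] += 1; n_k[m][k] += 1

    theta = n_dk + alpha
    return theta / theta.sum(axis=1, keepdims=True)
```

Because every modality's tokens update the same per-object count matrix `n_dk`, evidence from one modality (say, haptic) shifts the category posterior used when resampling another (say, visual), which is the mechanism the cited work credits for more accurate categorization than any single modality alone.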