2016
DOI: 10.1080/01691864.2016.1172507
Learning word meanings and grammar for verbalization of daily life activities using multilayered multimodal latent Dirichlet allocation and Bayesian hidden Markov models

Cited by 13 publications (20 citation statements: 0 supporting, 20 mentioning, 0 contrasting)
References 13 publications
“…Theoretical and empirical validations should be applied for further applications. So far, many researchers, including the authors, have proposed numerous cognitive models for robots: object concept formation based on appearance, usage, and function [41], formation of integrated concepts of objects and motions [42], grammar learning [16], language understanding [43], spatial concept formation and lexical acquisition [8,20,44], simultaneous phoneme and word discovery [45-47], and cross-situational learning [48,49]. These models can be regarded as integrative models constructed by combining smaller-scale models.…”
Section: Results (mentioning)
confidence: 99%
“…A further advancement of such cognitive systems allows robots to find the meanings of words by treating linguistic input as another modality [13][14][15]. Cognitive models have recently become more complex in realizing various cognitive capabilities: grammar acquisition [16], language model learning [17], hierarchical concept acquisition [18,19], spatial concept acquisition [20], motion skill acquisition [21], and task planning [7] (see Fig. 1).…”
Section: Introduction (mentioning)
confidence: 99%
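The idea of treating words as just another modality, as described in the statement above, can be illustrated with a small sketch: per-observation sensor histograms and word counts are concatenated into one bag-of-features, and a single topic model is fit over the joint counts. This is only a minimal illustration that uses scikit-learn's LDA as a stand-in for the multimodal LDA of the cited work; the modality sizes and synthetic counts are hypothetical.

```python
# Minimal sketch (not the authors' MLDA implementation): words are treated as
# just another modality by concatenating per-observation feature histograms
# with word counts, then fitting one LDA over the joint bag-of-features.
# All modality sizes and counts below are hypothetical.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

n_obs = 50          # multimodal observations collected by the robot
n_visual = 30       # size of the quantized visual feature histogram
n_haptic = 10       # size of the haptic feature histogram
n_words = 20        # vocabulary size of the co-occurring utterances

# Synthetic count data standing in for real sensor/word histograms.
visual = rng.poisson(2.0, size=(n_obs, n_visual))
haptic = rng.poisson(1.0, size=(n_obs, n_haptic))
words = rng.poisson(0.5, size=(n_obs, n_words))

# Linguistic input enters the model exactly like any other modality.
X = np.hstack([visual, haptic, words])

lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(X)   # per-observation concept proportions
phi = lda.components_          # per-concept feature/word distributions

# Word meanings emerge as the word columns of each concept's distribution.
word_part = phi[:, n_visual + n_haptic:]
print("Top word indices for concept 0:", np.argsort(word_part[0])[::-1][:5])
```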
“…Attamimi et al. (2016) proposed multilayered MLDA (mMLDA), which hierarchically integrates multiple MLDAs as an extension of Nakamura et al. (2011a). They estimated the relationships among words and multiple concepts by weighting the learned words according to their mutual information as a post-processing step.…”
Section: Related Work (mentioning)
confidence: 99%
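The mutual-information weighting mentioned above can be sketched as follows: given co-occurrence counts of words and concept categories, each word's contribution to the mutual information between words and concepts serves as its weight for that concept layer. This is an illustrative reconstruction of the idea, not the authors' code; the count table is hypothetical.

```python
# Illustrative sketch of the post-processing step described above: weight
# learned words by their mutual-information contribution with respect to a
# concept layer, so words strongly tied to that layer get higher weights.
import numpy as np

def mi_word_weights(joint):
    """Per-word contributions to I(W;C) from a word-by-concept count table."""
    p = joint / joint.sum()
    pw = p.sum(axis=1, keepdims=True)    # marginal over words
    pc = p.sum(axis=0, keepdims=True)    # marginal over concept categories
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.where(p > 0, np.log(p / (pw * pc)), 0.0)
    return (p * pmi).sum(axis=1)         # per-word share of the mutual information

# Hypothetical co-occurrence counts: rows = words, columns = concept categories.
counts_object_layer = np.array([[30.0,  1.0,  0.0],
                                [ 2.0, 25.0,  1.0],
                                [ 5.0,  5.0,  5.0]])

weights = mi_word_weights(counts_object_layer)
print("per-word MI weights:", np.round(weights, 4))
```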
“…In addition, their study did not use data that were autonomously obtained by the robot. In Attamimi et al. (2016), the robot could not learn the relationships between self-actions and words, because the action data consisted of human motions captured with a Kinect-based motion capture system and a wearable sensor device attached to a human. In our study, the robot learns the action category from its own subjective self-actions.…”
Section: Related Work (mentioning)
confidence: 99%
“…As reviewed by Tenenbaum (2011), the hierarchical framework can be formulated in detail in a probabilistic way, in which abstract knowledge also acts as a prior that guides learning and reasoning. Probabilistic models have also been applied to acquiring abstract knowledge from robot-environment interaction (Konidaris et al. 2015), human-robot interaction (Iwahashi 2008), and multimodal living environments (Attamimi 2016). Additionally, the hierarchical architecture can also be implemented as connectionist models.…”
Section: Embodied Symbolic Emergence In a Hierarchical Structure (mentioning)
confidence: 99%
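The role of abstract knowledge as a prior, noted in the statement above, can be made concrete with a tiny worked example: a Beta prior summarizing experience from earlier tasks pulls the posterior for a sparsely observed new task toward what was learned before. The numbers are purely illustrative and are not taken from the cited papers.

```python
# Minimal sketch of "abstract knowledge as a prior": a Beta prior learned from
# earlier tasks guides inference on a new task with very little data.
# Purely illustrative numbers; not from the cited papers.

# Abstract knowledge: pooled successes/failures from previously learned tasks,
# summarized as a Beta(alpha, beta) prior over a new task's success rate.
alpha_prior, beta_prior = 8.0, 2.0       # prior says "this kind of task usually succeeds"

# New task: only 3 observations (2 successes, 1 failure).
successes, failures = 2, 1

# Posterior mean under the informed prior vs. a flat Beta(1, 1) prior.
informed_mean = (alpha_prior + successes) / (alpha_prior + beta_prior + successes + failures)
flat_mean = (1 + successes) / (2 + successes + failures)

print(f"posterior mean with abstract prior: {informed_mean:.2f}")  # ~0.77
print(f"posterior mean with flat prior:     {flat_mean:.2f}")      # 0.60
```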