2017
DOI: 10.3389/fnbot.2017.00066

Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots

Abstract: In this paper, we propose a Bayesian generative model that can form multiple categories based on each sensory-channel and can associate words with any of the four sensory-channels (action, position, object, and color). This paper focuses on cross-situational learning using the co-occurrence between words and information of sensory-channels in complex situations rather than conventional situations of cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. I…
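To make the cross-situational idea in the abstract concrete, here is a minimal sketch, not the paper's actual Bayesian generative model: word-to-referent association via Dirichlet-smoothed co-occurrence counts between words and per-channel category indices. The four channel names follow the abstract; the class, its methods, the category count K, and the example data are all hypothetical.

from collections import defaultdict

CHANNELS = ("action", "position", "object", "color")
K = 10       # assumed number of categories per channel (hypothetical)
ALPHA = 0.1  # symmetric Dirichlet smoothing over categories

class CrossSituationalLearner:
    """Toy learner: counts how often each word co-occurs with each
    per-channel category, then picks the most probable referent."""

    def __init__(self):
        # counts[word][channel][category] -> co-occurrence count
        self.counts = defaultdict(
            lambda: {ch: defaultdict(float) for ch in CHANNELS})

    def observe(self, words, categories):
        # words: tokens heard in one situation;
        # categories: channel -> category index inferred in that situation.
        for w in words:
            for ch in CHANNELS:
                self.counts[w][ch][categories[ch]] += 1.0

    def best_referent(self, word):
        # Return the (channel, category) with the highest smoothed probability.
        best, best_p = None, -1.0
        for ch in CHANNELS:
            cat_counts = self.counts[word][ch]
            total = sum(cat_counts.values())
            for cat, n in cat_counts.items():
                p = (n + ALPHA) / (total + ALPHA * K)
                if p > best_p:
                    best, best_p = (ch, cat), p
        return best, best_p

learner = CrossSituationalLearner()
learner.observe(["grasp", "red", "ball"],
                {"action": 0, "position": 2, "object": 1, "color": 3})
learner.observe(["push", "red", "box"],
                {"action": 1, "position": 0, "object": 2, "color": 3})
print(learner.best_referent("red"))  # -> (('color', 3), 0.7)

Because "red" co-occurs with the same color category in both situations but with different action, position, and object categories, the consistent pairing accumulates the highest smoothed probability, which is the core of cross-situational disambiguation.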

Cited by 24 publications (15 citation statements)
References 48 publications (53 reference statements)
“…Taniguchi et al. (2019) summarized their studies and related work on cognitive developmental robotics, in which robots learn language through interaction with their environment, and on unsupervised learning methods that enable robots to learn language without hand-crafted training data. Within developmental robotics (Cangelosi and Schlesinger, 2014), Cangelosi and his group have proposed computational models that enable an iCub humanoid robot to ground action words through embodied communication (Marocco et al., 2010; Stramandinoli et al., 2017; Taniguchi et al., 2017; Zhong et al., 2019). Marocco et al. (2010) proposed a computational model that enables the iCub humanoid robot to learn the meaning of action words by physically interacting with the environment and linking the effects of actions with the behavior observed on an object before and after the action.…”
Section: Related Work
Mentioning confidence: 99%
“…Creating a robot that can learn language from its own sensorimotor experience alone is one of our challenges, and an essential element for understanding symbol emergence in cognitive systems. Many studies have explored modeling language acquisition in the developmental process using neural networks [91], [124], [152] and probabilistic models [5], [88], [153], [154].…”
Section: Language Acquisition by a Robot
Mentioning confidence: 99%
“…a word, through perceptual information [18]. Previous studies that investigated the use of cross-situational learning for grounding objects [13,40] as well as spatial concepts [2,10,41] ensured that one word appears several times together with the same perceptual feature vector so that a corresponding mapping can be created [14]. However, natural language is ambiguous due to homonymy, i.e.…”
Section: A. Grounding
Mentioning confidence: 99%
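A hedged toy illustration of the point in the statement above (all names and data are hypothetical, not from the cited works): classic cross-situational learning intersects the candidate referents a word co-occurs with across situations. A word that reliably appears with one referent converges to it, while a homonym leaves an empty intersection.

def intersect_referents(situations):
    # situations: list of (words, referents) pairs observed together.
    # Returns, per word, the referent set surviving intersection.
    candidates = {}
    for words, referents in situations:
        for w in words:
            candidates[w] = candidates.get(w, set(referents)) & set(referents)
    return candidates

situations = [
    ({"red", "ball"}, {"RED", "BALL"}),
    ({"red", "cup"}, {"RED", "CUP"}),
    ({"bat"}, {"ANIMAL_BAT"}),   # homonym: the animal...
    ({"bat"}, {"WOODEN_BAT"}),   # ...vs. the object
]
print(intersect_referents(situations))
# 'red' -> {'RED'}; 'bat' -> set(): homonymy defeats pure intersection.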