2020
DOI: 10.3389/fnbot.2020.00052
Crossmodal Language Grounding in an Embodied Neurocognitive Model

Abstract: Human infants are able to acquire natural language seemingly easily at an early age. Their language learning seems to occur simultaneously with learning other cognitive functions as well as with playful interactions with the environment and caregivers. From a neuroscientific perspective, natural language is embodied, grounded in most, if not all, sensory and sensorimotor modalities, and acquired by means of crossmodal integration. However, characterizing the underlying mechanisms in the brain is difficult and …

Cited by 24 publications (19 citation statements)
References 63 publications
“…The recent review by Uc-Cetina et al. (2021) illustrates the applicability of RL in NLP to some extent, such as machine translation, language understanding, and text generation. The authors also suggest considering embodiment (Heinrich et al., 2020), textual domain knowledge, and conversational settings. Bisk et al. (2020) focus further on embodiment and highlight the importance of physical and social context, more precisely, multimodal sensory experiences, to apprehend the coherency of words and actions.…”
Section: Reinforcement Learning and Computational Language Understanding Methods (mentioning)
Confidence: 99%
“…However, models like GPT-3 and DALL-E consider only disembodied language learning without any sensorimotor grounding because, unlike robots, they cannot physically interact with the world. Insights for grounded language learning in robotics (Heinrich et al., 2020) with sequential decision-making settings (Akakzia et al., 2021; Lynch and Sermanet, 2021) and embodied cognition (Feldman and Narayanan, 2004; Fischer and Zwaan, 2008) accentuate the need for embodied grounding. This includes physical interaction and multiple sensory modalities to develop systems that understand language more like humans (Anderson, 1972; Wermter et al., 2009; McClelland et al., 2020).…”
Section: Methods (mentioning)
Confidence: 99%
“…Most of these models perform unidirectionally. Our approach aims to map language commands to actions and perceived actions to language, as in the embodied neurocognitive model of Heinrich et al. (2020). Other contributions have been made to address language learning (Morse and Cangelosi, 2017), motor learning (Demiris and Khadhouri, 2006), and affordance learning (Stoytchev, 2008; Jamone et al., 2018), by successful integration of language and the physical body.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Heinrich et al. (2019) provide a vision dataset of 60 object-hand interactions. Heinrich et al. (2018, 2020) recorded the EMIL dataset on embodied multi-modal interaction for language learning. The dataset focuses on low-level crossmodal perception during environmental interactions from a body-rational perspective.…”
Section: Datasets (mentioning)
Confidence: 99%
“…In the experimental setup, NICO is seated at a table with appropriate dimensions for a child-sized robot, as depicted in Figure 7. This experimental setup was initially introduced by Kerzel and Wermter (2017b) and subsequently adapted for various studies related to visuomotor learning and crossmodal object interaction (e.g., Eppe et al., 2017; Kerzel et al., 2019b; Heinrich et al., 2018, 2020). Therefore, the experimental setup recreates a realistic neurorobotic learning scenario.…”
Section: Experimental Setup and Process (mentioning)
Confidence: 99%