2021 IEEE International Conference on Development and Learning (ICDL)
DOI: 10.1109/icdl49984.2021.9515668

Embodied Language Learning with Paired Variational Autoencoders

Cited by 6 publications (16 citation statements). References 8 publications.

“…PVAE is able to translate from actions to descriptions with 100% accuracy for all 144 patterns, including 108 training and 36 test patterns (see Table 1). This matches the results reported in Özdemir et al. (2021).…”
Section: Embodied Spatial Relation Learning (supporting)
confidence: 93%

“…A bidirectional embodied model, such as the PRAE (paired recurrent autoencoders; Yamada et al., 2018), is attractive for approaching the grounding of language, since it can both execute simple robot actions given language descriptions and generate language descriptions given executed and visually perceived actions. In our recent extension of the model in a robotic scenario (Özdemir et al., 2021), schematically shown in Figure 2, two cubes of different colors are placed on a table at which the NICO robot (Kerzel et al., 2017) is seated to interact with them (see Figure 3). Given proprioceptive and visual input, the approach is capable of translating robot actions into textual descriptions.…”
Section: Embodied Spatial Relation Learning (mentioning)
confidence: 99%
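
The statements above describe the core idea of the PVAE: two variational autoencoders, one per modality, whose latent spaces are bound together so that an action can be encoded by one autoencoder and decoded into a description by the other. What follows is a minimal PyTorch sketch of that idea only; the layer sizes, MLP encoders/decoders, and loss weighting are illustrative assumptions, not the authors' implementation (the published model operates on action and word sequences, which fixed-size feature vectors gloss over).

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityVAE(nn.Module):
    """One VAE over a fixed-size feature vector (a simplification:
    the real model encodes sequences, e.g. joint angles or words)."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu_head = nn.Linear(128, latent_dim)
        self.logvar_head = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu_head(h), self.logvar_head(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, logvar

def kl_term(mu, logvar):
    # KL divergence of the approximate posterior from a standard normal prior
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

class PVAE(nn.Module):
    """Paired VAEs: one for action input (proprioceptive + visual features),
    one for language, trained with a binding loss that aligns the two
    latent codes of each action/description pair."""
    def __init__(self, action_dim, lang_dim, latent_dim=16):
        super().__init__()
        self.action_vae = ModalityVAE(action_dim, latent_dim)
        self.lang_vae = ModalityVAE(lang_dim, latent_dim)

    def loss(self, action, lang):
        a_rec, a_mu, a_lv = self.action_vae(action)
        l_rec, l_mu, l_lv = self.lang_vae(lang)
        rec = F.mse_loss(a_rec, action) + F.mse_loss(l_rec, lang)
        kl = kl_term(a_mu, a_lv) + kl_term(l_mu, l_lv)
        bind = F.mse_loss(a_mu, l_mu)  # tie the paired latent codes together
        return rec + kl + bind

    @torch.no_grad()
    def describe(self, action):
        # Action-to-description translation: encode with the action VAE,
        # decode (deterministically, via the latent mean) with the language decoder.
        mu, _ = self.action_vae.encode(action)
        return self.lang_vae.dec(mu)

Because the binding loss pulls the paired latent means together during training, the cross-modal path in describe() (action encoder into language decoder) works at test time even though it is never trained directly; this is what lets the model translate perceived actions into descriptions, as the cited statements report.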