DOI: 10.1007/978-3-642-11694-0_7
Danijel Skočaj, Matej Kristan, Alen Vrečko, Aleš Leonardis, Mario Fritz, Michael Stark, Bernt Schiele, Somboon Hongeng, Jeremy L. Wyatt

Abstract: Multi-modal grounded language learning connects language predicates to physical properties of objects in the world. Sensing with multiple modalities, such as audio, haptics, and visual color and shape, while performing interaction behaviors like lifting, dropping, and looking at objects enables a robot to ground non-visual predicates like "empty" as well as visual predicates like "red". Previous work has established that grounding in multi-modal space improves performance on object retrieval from human desc…
