2010
DOI: 10.1007/978-3-642-11694-0_7
Abstract: Multi-modal grounded language learning connects language predicates to physical properties of objects in the world. Sensing with multiple modalities, such as audio, haptics, and visual colors and shapes, while performing interaction behaviors like lifting, dropping, and looking on objects enables a robot to ground non-visual predicates like "empty" as well as visual predicates like "red". Previous work has established that grounding in multi-modal space improves performance on object retrieval from human descr…


Cited by 2 publications (1 citation statement)
References 46 publications
“…Nevertheless, the center point would be the prediction of a forward model. Here, the modes of a conditional density may be more interesting than the mean function f (Kopicki et al 2011;Kopicki 2010;Skočaj et al 2010). …”
Section: Forward Models
confidence: 98%