Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/810

Grounded Language Learning: Where Robotics and NLP Meet

Abstract: Grounded language acquisition is concerned with learning the meaning of language as it applies to the physical world. As robots become more capable and ubiquitous, there is an increasing need for non-specialists to interact with and control them, and natural language is an intuitive, flexible, and customizable mechanism for such communication. At the same time, physically embodied agents offer a way to learn to understand natural language in the context of the world to which it refers. This paper gives an over…

Cited by 33 publications (22 citation statements)
References 2 publications (3 reference statements)
“…In prior ITL tasks, utterances are usually asserted in a context where their intended contents are satisfied (e.g., [38]). Most approaches assume the teacher's utterance is either a request to perform a specific action (e.g., [3]), or it describes the current state (e.g., [42,56,57,60]), or both are supported [48]. This means the non-linguistic context provides a positive example for learning to interpret the teacher's assertion: for instance, the agent can infer from the instruction "Put a red block on a blue block" that there is a red block and a blue block in the current visual scene that satisfy the preconditions of the put action, and its task in updating its model of symbol grounding is to estimate those positive exemplars from that scene, and update its grounding parameters with that positive evidence.…”
Section: Related Work (mentioning)
confidence: 99%
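To make the positive-evidence update described in the statement above concrete, the following is a minimal Python sketch, not taken from any of the cited systems: the word-level logistic classifiers, the colour-histogram object features, and the argmax exemplar-selection heuristic are all illustrative assumptions.

```python
# Minimal sketch (hypothetical) of updating word-grounding classifiers from the
# positive evidence implied by an instruction such as
# "Put a red block on a blue block": the agent assumes the instruction's
# preconditions hold in the current scene, picks likely exemplars, and updates.
import numpy as np

class WordGrounder:
    """One binary, logistic-regression-style classifier per word."""

    def __init__(self, feat_dim, lr=0.1):
        self.w = np.zeros(feat_dim)
        self.b = 0.0
        self.lr = lr

    def prob(self, feats):
        return 1.0 / (1.0 + np.exp(-(self.w @ feats + self.b)))

    def update(self, feats, label):
        # One step of online gradient descent on the logistic loss.
        err = label - self.prob(feats)
        self.w += self.lr * err * feats
        self.b += self.lr * err


def update_from_instruction(grounders, scene_objects, word_slots):
    """For each word the instruction mentions, treat the scene object the
    classifier currently scores highest as a positive exemplar (ties broken
    arbitrarily) and update that word's grounding parameters."""
    for word in word_slots:
        g = grounders.setdefault(word, WordGrounder(feat_dim=3))
        best = max(scene_objects, key=lambda o: g.prob(o["feats"]))
        g.update(best["feats"], label=1.0)


# Toy scene: colour histograms (R, G, B) stand in for real object features.
scene = [{"name": "block1", "feats": np.array([0.9, 0.1, 0.1])},
         {"name": "block2", "feats": np.array([0.1, 0.1, 0.9])}]
grounders = {}
update_from_instruction(grounders, scene, word_slots=["red", "blue"])
```

In a full system, exemplar selection would combine the classifier scores with the parsed structure of the utterance rather than a per-word argmax; the sketch only illustrates the use of the scene as positive evidence.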
“…[49,56,57]). The second approach is to build explicit classifiers for individual words and concepts, which are then combined with methods for joining these classifiers together to provide meanings of extended linguistic expressions via principles of semantic compositionality (e.g., [22,40,42,48,65]). This second approach has been the traditional choice in developing robot systems that learn to interpret instructions [17,22,40] (although see also Karamcheti et al [36], Al-Omari et al [2], Anderson et al [3]).…”
Section: Related Work (mentioning)
confidence: 99%
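The second approach quoted above, per-word classifiers combined via semantic compositionality, can be sketched as follows. This is a hypothetical illustration, not the implementation of any cited system; the lexicon of hand-written classifiers and the product-based conjunction rule are assumptions made for the example.

```python
# Sketch of composing per-word grounding classifiers into the meaning of a
# longer phrase: "red block" is grounded by conjoining the scores of its
# word-level classifiers over candidate scene objects.
from typing import Callable, Dict, List

# A grounding classifier maps an object's feature dict to a score in [0, 1].
Classifier = Callable[[dict], float]

def conjoin(classifiers: List[Classifier]) -> Classifier:
    """Conjunctive composition: a phrase applies to an object to the degree
    that every word in the phrase applies (product of word scores)."""
    def phrase_score(obj: dict) -> float:
        score = 1.0
        for c in classifiers:
            score *= c(obj)
        return score
    return phrase_score

def ground_phrase(phrase: str, lexicon: Dict[str, Classifier],
                  scene: List[dict]) -> dict:
    """Return the scene object that best satisfies the composed phrase meaning."""
    classifiers = [lexicon[w] for w in phrase.split() if w in lexicon]
    meaning = conjoin(classifiers)
    return max(scene, key=meaning)

# Toy hand-written word classifiers standing in for learned ones.
lexicon = {
    "red":   lambda o: o["redness"],
    "blue":  lambda o: 1.0 - o["redness"],
    "block": lambda o: 1.0 if o["shape"] == "block" else 0.1,
}
scene = [{"name": "b1", "redness": 0.9, "shape": "block"},
         {"name": "b2", "redness": 0.2, "shape": "block"}]
print(ground_phrase("red block", lexicon, scene)["name"])  # -> "b1"
```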
“…In the literature, several novel works have been proposed that employ cutting-edge approaches to address that challenge. For example, authors in [24] centered on the use case of people teaching a robot about objects and tasks in its environment via unconstrained natural language. They designed statistical machine learning approaches to allow robots to gain knowledge about the world from interactions with users, while simultaneously acquiring semantic representations of language about objects and tasks.…”
Section: Related Work (mentioning)
confidence: 99%
“…To succeed, VLN agents must internalize the (possibly noisy) natural language instruction, plan action sequences, and move in environments that dynamically change what is presented in their visual fields. These challenging settings bring simulation-based VLN work closer to real-world, language-based interaction with robots [28].…”
Section: Introduction (mentioning)
confidence: 99%
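The loop a VLN agent runs, internalizing an instruction, then repeatedly observing, planning, and moving as the environment changes, can be outlined as below. This skeleton implies no specific VLN system or simulator API; the action set, the keyword-based placeholder policy, and the dummy environment are assumptions made purely for illustration.

```python
# Skeleton of a VLN agent's perception-planning-action loop: encode a possibly
# noisy instruction once, then repeatedly observe the current visual field,
# choose a navigation action, and act until the agent decides to stop.
from dataclasses import dataclass, field
from typing import List

ACTIONS = ["forward", "turn_left", "turn_right", "stop"]

@dataclass
class VLNAgent:
    instruction: str
    history: List[str] = field(default_factory=list)

    def encode_instruction(self) -> List[str]:
        # Placeholder for a learned language encoder.
        return self.instruction.lower().split()

    def choose_action(self, observation: dict) -> str:
        # Placeholder policy: a learned model would fuse the instruction
        # encoding, the visual observation, and the action history here.
        tokens = self.encode_instruction()
        if "left" in tokens and "turn_left" not in self.history:
            return "turn_left"
        if len(self.history) >= 5:
            return "stop"
        return "forward"

    def step(self, observation: dict) -> str:
        action = self.choose_action(observation)
        self.history.append(action)
        return action

# Toy rollout against a dummy environment that returns empty observations.
agent = VLNAgent(instruction="Go forward and turn left at the sofa")
action = None
while action != "stop":
    action = agent.step(observation={})  # a real env would render the view here
    print(action)
```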