2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2016.7759412
A model for verifiable grounding and execution of complex natural language instructions

Cited by 25 publications (24 citation statements)
References 16 publications
“…The language grounding step was completed in 0.33 seconds for (a) and 0.37 seconds for (b); the inferred command and grounding triggered the motion planning and task execution routine that required 10 seconds in each case. Additional examples of physical demonstrations of natural language interfaces to robots based on variations of the DCG on robot torsos, unmanned ground vehicles, and assistive robotic manipulators are described in Boteanu et al. (2016), Oh et al. (2016), and Broad et al. (2017).…”
Section: Results (mentioning)
confidence: 99%
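
The excerpt above describes a two-stage pipeline: fast language grounding whose output triggers a slower motion-planning and execution routine. A purely structural sketch of that flow in Python follows; the function names, the toy world model, and the returned values are placeholders invented for illustration, not the authors' system or API:

    import time

    # Placeholder stand-ins for the DCG grounding, planning, and execution
    # components the excerpt mentions; none of this is the authors' actual API.

    def dcg_ground(command, world):
        # Pretend to infer a grounding (action + goal) for the command.
        return {"action": "navigate", "goal": world.get(command, "unknown")}

    def plan_and_execute(grounding):
        # Pretend to plan a motion and execute the grounded action.
        return f"executed {grounding['action']} to {grounding['goal']}"

    world = {"go to the kitchen": "kitchen"}
    t0 = time.perf_counter()
    grounding = dcg_ground("go to the kitchen", world)
    t1 = time.perf_counter()
    result = plan_and_execute(grounding)
    t2 = time.perf_counter()
    print(result)
    print(f"grounding {t1 - t0:.3f} s, planning + execution {t2 - t1:.3f} s")

In the excerpt, the first stage took roughly 0.3 seconds and the second roughly 10 seconds; the sketch only mirrors the shape of that measurement, not the components themselves.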
“…The question of how to effectively convert between natural language instructions and robot behavior has been widely studied in previous work [50,34,24,14,9,47,8,18,11,1,33,28,36,27,40,7,2,19,37]. So far, there have been three categories of behavior specifications that these works have mapped natural language to: action sequences, goal states, and LTL specifications.…”
Section: Related Work (mentioning)
confidence: 99%
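
Of the three specification categories named in the excerpt, LTL is the most compact to illustrate. As a hedged example (the propositions kitchen and hallway are hypothetical atoms, not drawn from any of the cited corpora), a command such as "eventually reach the kitchen and never enter the hallway" could be written as:

    \Diamond\,\mathit{kitchen} \;\wedge\; \Box\,\neg\mathit{hallway}

where \Diamond ("eventually") requires the goal proposition to hold at some future step and \Box ("always") requires the safety proposition to hold at every step.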
“…Lignos et al. [28] instead used off-the-shelf parsers to interpret the full space of natural language, but do not provide a method to learn from new robot-specific language, and therefore their approach may be limited by the particular corpus used by the existing parser. In contrast, Boteanu et al. [7] collected a crowdsourced corpus of block-sorting instructions to train a Verifiable Distributed Correspondence Graph model that mapped natural language to structured English. In this work, we present a corpus that is an order of magnitude larger, and apply the sequence-to-sequence framework that allows grounding to the full space of LTL formulae (instead of the GR(1) fragment).…”
Section: Related Work (mentioning)
confidence: 99%
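
Since the excerpt contrasts a structured-English target with sequence-to-sequence grounding to full LTL, a minimal sketch of the latter may help. This assumes PyTorch, and every vocabulary size, token id, and the architecture itself are illustrative stand-ins rather than the cited model:

    import torch
    import torch.nn as nn

    class Seq2SeqLTL(nn.Module):
        """Toy encoder-decoder mapping command tokens to LTL tokens."""
        def __init__(self, src_vocab, tgt_vocab, hidden=64):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, hidden)
            self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, tgt_vocab)

        def forward(self, src, tgt_in):
            _, h = self.encoder(self.src_emb(src))          # summarize the command
            dec, _ = self.decoder(self.tgt_emb(tgt_in), h)  # teacher-forced decode
            return self.out(dec)                            # logits over LTL tokens

    model = Seq2SeqLTL(src_vocab=100, tgt_vocab=30)
    src = torch.tensor([[5, 9, 3, 12]])    # e.g. ids for "go to the kitchen"
    tgt_in = torch.tensor([[1, 7]])        # e.g. ids for <sos>, F (eventually)
    logits = model(src, tgt_in)            # shape (1, 2, 30): next-token scores

Training such a model amounts to cross-entropy over (command, formula) pairs, which is where the order-of-magnitude-larger crowdsourced corpus the excerpt mentions comes in.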
“…Using natural language is the most efficient way to issue a command to robots, and since they have to operate in the physical world, understanding the way humans describe space is crucial. Current state-of-the-art approaches to grounding natural language commands in general, and spatial commands in particular, are based on probabilistic graphical models (PGM) such as Generalized Grounding Graphs (G³) (Tellex et al., 2011) and Distributed Correspondence Graphs (DCG) (Howard et al., 2014) and their modifications (Broad et al., 2016; Paul et al., 2016; Boteanu et al., 2016; Chung et al., 2015).…”
Section: Related Work (mentioning)
confidence: 99%
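
For concreteness, an abbreviated rendering of the DCG inference objective from Howard et al. (2014) follows; the notation is simplified from the paper and should be read as a sketch:

    \Phi^{*} = \underset{\Phi}{\arg\max} \prod_{i=1}^{|\Lambda|} \prod_{j=1}^{|\Gamma|} p\!\left(\phi_{ij} \mid \gamma_{ij}, \lambda_{i}, \Phi_{c_{i}}, \Upsilon\right)

Here \Lambda is the set of phrases in the parsed command, \Gamma the space of candidate groundings, \phi_{ij} a binary variable asserting that grounding \gamma_{ij} corresponds to phrase \lambda_{i}, \Phi_{c_{i}} the correspondences already inferred for child phrases, and \Upsilon the world model. Factoring inference over the parse structure is what distinguishes the DCG family, including the verifiable variant of the indexed paper, from flat grounding models.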