2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) 2017
DOI: 10.1109/roman.2017.8172349
Contextual awareness: Understanding monologic natural language instructions for autonomous robots

Cited by 11 publications (10 citation statements) · References 22 publications
“…A well-studied problem in this setting is Vision-and-Language Navigation, where the task consists of navigating to a desired location in an environment, given a natural language command and an egocentric view from the agent's current position [2,9,36]. Another subcategory of instruction-following involves instructing an embodied agent using natural language [32,15,5,29,30,22,31]. Our proposed setting is different from instruction-following, in that the goal of the target task is not communicated using language alone; instead, a demonstration of a related task (source task) is available, and language is used to communicate the difference between the demonstrated task and the target task.…”
Section: Related Work
confidence: 99%
“…Understanding sequences of natural language utterances has been addressed using semantic parsing (e.g., Miller et al., 1996; MacMahon et al., 2006; Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Artzi et al., 2014; Long et al., 2016; Iyyer et al., 2017; Suhr et al., 2018; Arkin et al., 2017; Broad et al., 2017). Interactions were also used for semantic parser induction (Artzi and Zettlemoyer, 2011; Thomason et al., 2015; Wang et al., 2016).…”
Section: Related Work
confidence: 99%
“…More recently, Paul et al. proposed a method to deal with abstract spatial concepts in instructions [10]. Arkin et al. proposed a variant of a contemporary probabilistic graphical model for language understanding in a context [11].…”
Section: Related Work
confidence: 99%
“…In prior studies, Paul et al. and Arkin et al. focused on directly grounding natural language onto phenomena and objects in the real world [10], [11]. We extend SDC to make grounding natural language onto the real world more accurate, and to accept various expressions in verbal instructions for parking a car.…”
Section: Extended SDC
confidence: 99%