Robotics: Science and Systems XIV 2018
DOI: 10.15607/rss.2018.xiv.067

Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications

Abstract: Natural language commands issued to robots often specify not only a particular target configuration or goal state but also constraints on how the robot goes about its execution. That is, the path taken to achieve some goal state is given equal importance to the goal state itself. One example is instructing a wheeled robot to "go to the living room but avoid the kitchen" so as not to scuff the floor. This class of behaviors poses a serious obstacle to existing la…
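As a concrete illustration (mine, not the paper's), commands of this kind are naturally captured in linear temporal logic (LTL), where path constraints are first-class. One plausible encoding of the example above uses the "until" operator:

```latex
\neg\,\mathit{kitchen}\ \ \mathcal{U}\ \ \mathit{living\_room}
```

Read: the kitchen proposition must stay false until the living-room proposition becomes true. Unlike a goal-only specification such as $\mathsf{F}\,\mathit{living\_room}$, satisfaction depends on the entire trajectory, which is what makes the specification non-Markovian.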

Cited by 36 publications (30 citation statements) | References 35 publications
“…At a high level, the key to the symbol grounding problem lies in how object, spatial relation, and attribute classifiers, and primitive robot actions, can be linked to the semantic representations of sentences. Many approaches exist, including attributed relational graph matching [5], defining robot actions in terms of goal states or action sequences [5,16], probabilistic graphical models such as conditional random fields and hierarchical adaptive distributed correspondence graphs [41,48,49], and active and interactive learning to learn new words and classifiers [25,50,51].…”
Section: Symbol Grounding in Human-Robot Interaction
confidence: 99%
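As a toy picture of the "linking" step this passage describes, the sketch below maps atoms of a parsed command onto attribute and spatial-relation classifiers evaluated in a world state. Every name in it (the classifier table, ground_atom, the toy state) is hypothetical, not an API from any of the cited systems.

```python
# Minimal sketch: link semantic atoms from a parsed command to
# perception classifiers (attribute and spatial-relation) and evaluate
# them as propositional functions in a world state.

def red_classifier(obj, state):
    return state["colors"].get(obj) == "red"

def on_classifier(a, b, state):
    return (a, b) in state["on_relations"]

CLASSIFIERS = {
    "red": red_classifier,   # attribute classifier
    "on": on_classifier,     # spatial-relation classifier
}

def ground_atom(predicate, args, state):
    """Evaluate one semantic atom, e.g. on(block1, table), in a world state."""
    return CLASSIFIERS[predicate](*args, state)

state = {
    "colors": {"block1": "red"},
    "on_relations": {("block1", "table")},
}

# Semantic representation of "the red block on the table":
atoms = [("red", ("block1",)), ("on", ("block1", "table"))]
print(all(ground_atom(p, args, state) for p, args in atoms))  # True
```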
“…Previous approaches have attempted to solve this problem using a single deep neural network architecture trained using reinforcement learning (Anderson et al. 2018; Blukis et al. 2019; Chaplot et al. 2018). Another approach to solve this problem is to translate natural language into a sequence of symbols that can then be given as input to a planner (Gopalan et al. 2018). It is possible to learn these symbols directly from data (Gopalan et al. 2020), and compose these symbols using semantic parsing (Dzifcak et al. 2009; Williams et al. 2018) to create novel task specifications that can then be planned over.…”
Section: Related Work
confidence: 99%
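The translate-then-plan decomposition is easiest to see on the abstract's own example. Below is a minimal sketch (the token format and checker are invented here, not the cited systems' representations) of why the translated specification is non-Markovian: whether it is satisfied depends on the whole path, not on any single state.

```python
# Toy illustration: satisfaction of "avoid kitchen until living room"
# depends on the entire trajectory of visited rooms.

def holds_until(avoid, reach, trajectory):
    """Check  (not avoid) U reach  over a finite trajectory of room labels."""
    for room in trajectory:
        if room == reach:
            return True        # goal reached without violating the constraint
        if room == avoid:
            return False       # constraint violated before reaching the goal
    return False               # goal never reached

# Hypothetical Seq2Seq output for "go to the living room but avoid the kitchen":
tokens = ["!", "kitchen", "U", "living_room"]
avoid, reach = tokens[1], tokens[3]

print(holds_until(avoid, reach, ["hall", "office", "living_room"]))   # True
print(holds_until(avoid, reach, ["hall", "kitchen", "living_room"]))  # False
```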
“…Prior work [4], [6], [7] uses supervised learning to train Seq2Seq models to output grounded LTL task specifications from natural language. Gopalan et al. [7] discussed challenges relating to such language model generalization, and introduced and demonstrated the ability of Seq2Seq models to ground natural language to geometric LTL, utilizing the framework of grounding natural language to reward functions introduced by MacGlashan et al. [14], where reward-function task specifications were expressed as conjunctions of propositional functions learned from demonstration. Berg et al. [6] focused on the generalization problem of grounding unseen language to LTL by applying CopyNet [10], a Seq2Seq model with a copying mechanism, to copy out-of-vocabulary words present in the input command to the output LTL.…”
Section: Related Work
confidence: 99%
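The copying idea can be sketched in a few lines. This is an illustrative pointer-generator-style mixture rather than CopyNet's exact scoring; all numbers and the gating weight below are made up.

```python
import numpy as np

# Sketch of copy-augmented decoding: the final distribution mixes a
# generation distribution over a fixed output vocabulary with a copy
# distribution over source tokens, so an out-of-vocabulary source word
# (here "pantry") remains producible in the output.

vocab = ["F", "G", "!", "U", "living_room", "kitchen"]
source = ["go", "to", "the", "pantry"]          # "pantry" is OOV

gen_scores = np.array([0.5, 0.1, 0.2, 0.1, 1.0, 0.3])   # over vocab
copy_scores = np.array([0.1, 0.1, 0.1, 2.0])            # over source positions

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

p_gen_mix = 0.4                                  # hypothetical gating weight
p_vocab = p_gen_mix * softmax(gen_scores)
p_copy = (1 - p_gen_mix) * softmax(copy_scores)

# Merge: each source position contributes mass to the word at that position.
extended = {w: p for w, p in zip(vocab, p_vocab)}
for pos, word in enumerate(source):
    extended[word] = extended.get(word, 0.0) + p_copy[pos]

print(max(extended, key=extended.get))           # "pantry"
```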
“…The exception to this was MacGlashan et al. [14], where the focus was on representing atemporal task specifications. Our work builds on the work of Gopalan et al. [7], MacGlashan et al. [14], and Berg et al. [6] in that we employ propositional functions to generate atomic propositions. Our contributions differ from prior work in that we consider lifted LTL representations in order to generalize task specifications over objects, and consider these representations for both manipulation and navigation tasks.…”
Section: Related Work
confidence: 99%
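The lifting described in this last statement can be pictured as template instantiation: a specification with object variables is bound to concrete objects per command, so one learned specification generalizes across objects. The template syntax and predicate names below are hypothetical.

```python
# Sketch of a lifted LTL task template with object variables {x}, {y},
# instantiated per binding to yield a grounded specification.

LIFTED_TEMPLATE = "F ( grasped({x}) & F delivered({x}, {y}) )"

def instantiate(template, **bindings):
    """Bind object variables in a lifted specification to concrete objects."""
    return template.format(**bindings)

print(instantiate(LIFTED_TEMPLATE, x="mug", y="table"))
print(instantiate(LIFTED_TEMPLATE, x="book", y="shelf"))
# F ( grasped(mug) & F delivered(mug, table) )
# F ( grasped(book) & F delivered(book, shelf) )
```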