Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2016
DOI: 10.18653/v1/p16-1002
Data Recombination for Neural Semantic Parsing

Abstract: Modeling crisp logical regularities is crucial in semantic parsing, making it difficult for neural models with no task-specific prior knowledge to achieve good results. In this paper, we introduce data recombination, a novel framework for injecting such prior knowledge into a model. From the training data, we induce a high-precision synchronous context-free grammar, which captures important conditional independence properties commonly found in semantic parsing. We then train a sequence-to-sequence recurrent net…
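To make the abstract's core idea concrete, here is a minimal, hypothetical sketch of data recombination: abstract entity mentions in (utterance, logical form) pairs into a slot, treat the resulting templates as toy synchronous rules, and recombine them with known entities to sample extra training pairs. The ENTITY placeholder, the toy GeoQuery-style pairs, and the rule format are illustrative assumptions, not the paper's actual grammar induction.

    # Toy illustration of data recombination: induce simple synchronous
    # templates from (utterance, logical form) pairs by abstracting shared
    # entities, then recombine them to generate synthetic training pairs.
    # The ENTITY slot and rule format are simplifications for illustration.
    import itertools

    train = [
        ("what states border texas", "answer(state(borders(stateid('texas'))))"),
        ("what states border ohio",  "answer(state(borders(stateid('ohio'))))"),
    ]
    entities = [("texas", "stateid('texas')"), ("ohio", "stateid('ohio')")]

    # Induce templates: replace an entity mention with an ENTITY slot on both sides.
    templates = []
    for utt, lf in train:
        for surface, sym in entities:
            if surface in utt and sym in lf:
                templates.append((utt.replace(surface, "ENTITY"),
                                  lf.replace(sym, "ENTITY")))

    # Recombine: plug every known entity back into every template, yielding
    # synthetic (utterance, logical form) pairs beyond the original data.
    recombined = [
        (t_utt.replace("ENTITY", surface), t_lf.replace("ENTITY", sym))
        for (t_utt, t_lf), (surface, sym) in itertools.product(templates, entities)
    ]
    for pair in recombined:
        print(pair)

The recombined pairs include both reconstructions of the original examples and new combinations (for example, the "texas" template filled with "ohio"), which is the sense in which recombination injects structural prior knowledge into the training set.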

Citations: cited by 411 publications (556 citation statements)
References: 18 publications
“…This is because the model must interact with a symbolic executor through non-differentiable operations to search over a large program space. In semantic parsing, recent work handled this (Dong and Lapata, 2016; Jia and Liang, 2016) by training from manually annotated programs and avoiding program execution at training time. However, annotating programs is known to be expensive and scales poorly.…”
Section: Introduction (mentioning)
Confidence: 99%
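As a rough illustration of the contrast this statement draws, the sketch below shows, under assumed and heavily simplified interfaces, why annotated programs allow a directly differentiable per-token loss, while learning from execution alone routes the training signal through a black-box executor. The function names and data layout are hypothetical, not from any cited paper.

    # Fully supervised: the gold program is annotated, so a per-token loss can
    # be computed (and, in a real model, backpropagated) directly.
    def supervised_loss(model_token_logprobs, gold_program_tokens):
        # model_token_logprobs[t] maps each candidate token to its log-probability
        # at decoding step t; cross-entropy on annotated program tokens.
        return -sum(model_token_logprobs[t][tok]
                    for t, tok in enumerate(gold_program_tokens))

    # Weakly supervised: only the answer is given, so the model must propose
    # programs, run them through a symbolic executor, and learn from the result.
    def weakly_supervised_reward(sampled_program, gold_answer, execute):
        # execute() is a symbolic interpreter or database query: no gradient
        # flows through it, so learning needs search or policy-gradient-style
        # estimators over a large program space.
        return 1.0 if execute(sampled_program) == gold_answer else 0.0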
“…A variety of solutions have been developed to address infrequent or out-of-vocabulary words in particular (Gülçehre et al., 2016; Jia and Liang, 2016). Instead of directly copying input words or deterministically selecting output, our model can learn how to generate them (e.g., it might prefer to produce the word "steaks" when the original recipe ingredient was "ribeyes").…”
Section: Related Work (mentioning)
Confidence: 99%
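The copying idea referenced in this statement can be sketched as mixing a vocabulary (generation) distribution with an attention-weighted copy distribution over source words. The numbers, vocabulary, and pointer-style mixture below are illustrative assumptions, not the exact mechanism of any cited paper.

    # Minimal sketch of attention-based copying: combine a generation
    # distribution over the output vocabulary with a copy distribution over
    # input words, assuming precomputed attention weights.
    import numpy as np

    vocab = ["<unk>", "make", "the", "steaks", "grill"]
    source_words = ["ribeyes", "steaks"]

    p_gen = 0.4                                           # probability of generating from the vocabulary
    gen_dist = np.array([0.05, 0.35, 0.30, 0.20, 0.10])   # softmax over vocab (illustrative values)
    attention = np.array([0.7, 0.3])                      # attention over source positions

    # Extend the vocabulary with source words so copied tokens get probability mass.
    extended = vocab + [w for w in source_words if w not in vocab]
    final = np.zeros(len(extended))
    final[:len(vocab)] = p_gen * gen_dist
    for pos, word in enumerate(source_words):
        final[extended.index(word)] += (1.0 - p_gen) * attention[pos]

    print(dict(zip(extended, final.round(3))))

In this toy example the out-of-vocabulary source word "ribeyes" receives probability mass only through the copy distribution, which is how such mechanisms handle rare or unseen input words.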
“…Another approach is to treat semantic parsing as a machine translation problem, where the logical form is linearized then predicted as an unstructured sequence of tokens (Andreas et al., 2013). This approach is taken by recent neural semantic parsers (Jia and Liang, 2016; Dong and Lapata, 2016; Locascio et al., 2016; Ling et al., 2016). This approach has the advantage of predicting the logical form directly from the question without latent variables, which simplifies learning, but the disadvantage of ignoring type constraints on logical forms.…”
Section: Related Work (mentioning)
Confidence: 99%
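A small sketch of the linearization step this statement describes: a tree-structured logical form is flattened into a plain token sequence that a sequence model can predict left to right. The tree encoding and tokenization below are assumptions for illustration.

    # Sketch of treating semantic parsing as translation: flatten a
    # (predicate, [children]) logical-form tree into a token sequence.
    def linearize(node):
        if isinstance(node, str):          # leaf: constant or entity
            return [node]
        pred, children = node
        tokens = [pred, "("]
        for i, child in enumerate(children):
            if i > 0:
                tokens.append(",")
            tokens.extend(linearize(child))
        tokens.append(")")
        return tokens

    logical_form = ("answer", [("state", [("borders", [("stateid", ["texas"])])])])
    print(" ".join(linearize(logical_form)))
    # answer ( state ( borders ( stateid ( texas ) ) ) )

Once flattened like this, nothing in the token sequence itself enforces bracket matching or well-typedness, which is exactly the disadvantage the statement points out.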
“…A typical semantic parsing task is question answering against a database, which is accomplished by translating questions into executable logical forms (i.e., programs) that output their answers. Recent work has shown that recurrent neural networks can be used for semantic parsing by encoding the question then predicting each token of the logical form in sequence (Jia and Liang, 2016; Dong and Lapata, 2016). These approaches, while effective, have two major limitations.…”
Section: Introduction (mentioning)
Confidence: 99%
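A minimal PyTorch-style sketch of the encode-then-decode setup described in this statement: one LSTM encodes the question, a second LSTM predicts logical-form tokens one at a time. The model sizes, vocabularies, and the absence of attention or copying are simplifying assumptions, not the architecture of the cited papers.

    # Minimal encoder-decoder sketch: encode the question, then predict the
    # logical form token by token from the encoder's final state.
    import torch
    import torch.nn as nn

    class Seq2SeqParser(nn.Module):
        def __init__(self, src_vocab, tgt_vocab, dim=64):
            super().__init__()
            self.src_embed = nn.Embedding(src_vocab, dim)
            self.tgt_embed = nn.Embedding(tgt_vocab, dim)
            self.encoder = nn.LSTM(dim, dim, batch_first=True)
            self.decoder = nn.LSTM(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, tgt_vocab)

        def forward(self, question_ids, program_ids):
            # Encode the question; reuse its final state to initialise the decoder.
            _, state = self.encoder(self.src_embed(question_ids))
            dec_out, _ = self.decoder(self.tgt_embed(program_ids), state)
            return self.out(dec_out)       # logits over the next logical-form token

    # Toy usage: a batch of one question (5 tokens) and gold program prefix (7 tokens).
    model = Seq2SeqParser(src_vocab=100, tgt_vocab=50)
    logits = model(torch.randint(0, 100, (1, 5)), torch.randint(0, 50, (1, 7)))
    print(logits.shape)   # torch.Size([1, 7, 50])

Training such a model with token-level cross-entropy against annotated logical forms is the supervised setup the statement refers to; its limitations (for example, no guarantee of executable, well-typed output) are what the citing work goes on to address.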