Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1188

Question Generation from SQL Queries Improves Neural Semantic Parsing

Abstract: We study how to learn a semantic parser of state-of-the-art accuracy with less supervised training data. We conduct our study on WikiSQL, the largest hand-annotated semantic parsing dataset to date. First, we demonstrate that question generation is an effective method that empowers us to learn a state-of-the-art neural network based semantic parser with thirty percent of the supervised training data. Second, we show that applying question generation to the full supervised training data further improves the state-of-the-art…

Cited by 45 publications (40 citation statements). References 42 publications.

Citation statements (ordered by relevance):
“…We perform a PSEUDO baseline following the setup in Sennrich et al. (2016) and Guo et al. (2018). The pre-trained LF2Q or Q2LF model is used to generate pseudo ⟨query, logical form⟩ pairs from unlabeled logical forms or unlabeled queries, which extends the training set.…”
Section: Results and Analysis (mentioning)
confidence: 99%
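To make the PSEUDO setup described above concrete, here is a minimal sketch of that back-translation-style augmentation, assuming a hypothetical lf2q_model with a generate method; it illustrates the technique only and is not code from the cited systems.

```python
# Sketch of the PSEUDO baseline: a pre-trained LF2Q (logical form -> question)
# model labels unlabeled logical forms, and the resulting pseudo pairs are
# appended to the supervised training set. `lf2q_model.generate` is a
# hypothetical interface, not an API from any cited system.

def build_pseudo_pairs(lf2q_model, unlabeled_logical_forms):
    """Generate pseudo (question, logical_form) training pairs."""
    pseudo_pairs = []
    for lf in unlabeled_logical_forms:
        question = lf2q_model.generate(lf)  # decode a question for this LF
        pseudo_pairs.append((question, lf))
    return pseudo_pairs

def extend_training_set(labeled_pairs, pseudo_pairs):
    # Downstream training treats gold and pseudo pairs identically;
    # the pseudo data simply enlarges the supervised set.
    return list(labeled_pairs) + list(pseudo_pairs)
```

The symmetric Q2LF direction works the same way: parse unlabeled questions into pseudo logical forms and add those pairs instead.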
“…Su and Yan (2017) use multiple source domains to reduce the cost of collecting data for the target domain. Guo et al. (2018) pre-train a question generation model to produce pseudo-labeled data as a supplement.…”
Section: Related Work (mentioning)
confidence: 99%
“…Such models were first introduced for machine translation, that is, for mapping a sentence in one natural language to the corresponding sentences in another (Bahdanau, Cho, & Bengio, 2015; Luong, Pham, & Manning, 2015). With some changes, this neural architecture has been extensively adapted for semantic parsing (Dong & Lapata, 2016; Guo et al., 2018; Jia & Liang, 2016; Sun et al., 2018; X. Xu, Liu, & Song, 2017; Zhong, Xiong, & Socher, 2017) and more specifically for the KGQA task (Cheng & Lapata, 2018; Guo, Tang, Duan, Zhou, & Yin, 2018; He & Golub, 2016; C. Liang et al., 2017; Yin, Zhou, He, & Neubig, 2018).…”
Section: Neural Network-based KGQA Systems (mentioning)
confidence: 99%
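As a rough illustration of the encoder-decoder-with-attention architecture this statement refers to, the sketch below computes a single attention step using simplified dot-product scoring (Bahdanau et al.'s original formulation is additive; Luong et al. describe dot-product variants); all names and sizes here are invented for the example.

```python
# One attention step of a seq2seq decoder: the decoder state queries the
# encoder states and receives a context vector that summarizes the most
# relevant source positions. Dot-product scoring, for brevity.
import numpy as np

def attend(decoder_state, encoder_states):
    """Return a context vector: a softmax-weighted mix of encoder states."""
    scores = encoder_states @ decoder_state      # (T,) one score per source position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over source positions
    return weights @ encoder_states              # (d,) context vector

# Toy usage: 5 encoder positions, hidden size 4.
rng = np.random.default_rng(0)
encoder_states = rng.standard_normal((5, 4))
decoder_state = rng.standard_normal(4)
context = attend(decoder_state, encoder_states)  # fed back into the decoder
```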
“…Many semantic parsing approaches rely on a decoder that generates production rules or, more generally, actions, rather than logical form tokens (Cheng & Lapata, 2018; Guo, Sun, et al., 2018; Guo, Tang, et al., 2018; Lin, Bogin, Neumann, Berant, & Gardner, 2019; Rabinovich, Stern, & Klein, 2017; Shen et al., 2019; Yin & Neubig, 2017; Yin & Neubig, 2018; T. Yu et al., 2018). For example, suppose we would like to decode a query argmax (Astronauts, height).…”
Section: Neural Network-based KGQA Systems (mentioning)
confidence: 99%
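A toy version of such action-based decoding, using the argmax (Astronauts, height) example from the statement above; the grammar and action encoding are invented for illustration and do not correspond to any cited system.

```python
# Toy leftmost-derivation decoder: the model predicts grammar actions
# (production-rule choices) instead of raw logical-form tokens, which
# guarantees the output is well-formed.

GRAMMAR = {
    "Expr": [["argmax", "(", "Set", ",", "Attr", ")"]],
    "Set":  [["Astronauts"], ["People"]],
    "Attr": [["height"], ["age"]],
}

def apply_actions(actions):
    """Expand the start symbol; each action picks one production rule
    for the next non-terminal in a leftmost derivation."""
    stack, output = ["Expr"], []
    for nonterminal, rule_idx in actions:
        while stack:
            sym = stack.pop(0)
            if sym in GRAMMAR:                    # non-terminal: expand it
                assert sym == nonterminal
                stack = list(GRAMMAR[sym][rule_idx]) + stack
                break
            output.append(sym)                    # terminal: emit it
    output.extend(stack)                          # flush trailing terminals
    return "".join(output)

# Expr -> argmax(Set, Attr); Set -> Astronauts; Attr -> height
print(apply_actions([("Expr", 0), ("Set", 0), ("Attr", 0)]))
# prints: argmax(Astronauts,height)
```

In a real parser, the rule choice at each step comes from the decoder's softmax over the productions applicable to the current non-terminal, so every decoded sequence is a syntactically valid logical form.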
“…Ke et al. (2018) train models to ask questions in open-domain conversational systems to better interact with people. Guo et al. (2018) develop a sequence-to-sequence model to generate natural language questions.…”
Section: Question Generation (mentioning)
confidence: 99%