Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data 2020
DOI: 10.1145/3318464.3380589
DBPal: A Fully Pluggable NL2SQL Training Pipeline

Cited by 28 publications (16 citation statements)
References 30 publications
“…Bootstrapping a Semantic Parser. One line of prior work on quickly bootstrapping a semantic parser has focused on creating synthetic training examples from a grammar developed by hand (Campagna et al., 2019; Weir et al., 2020; Marzoev et al., 2020; Campagna et al., 2020) or derived automatically from existing data (Jia and Liang, 2016). Wang et al. (2015) described an approach to bootstrapping that uses a grammar to generate canonical forms, which are paraphrased by crowdworkers to produce training data "overnight."…”
Section: Introduction
confidence: 99%
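The grammar- and template-based bootstrapping described in the statement above can be sketched with a minimal example. The schema, templates, and function names below are hypothetical illustrations, not the actual DBPal pipeline: each hand-written template pairs a canonical natural-language form with a SQL skeleton, and expanding the templates over the schema yields synthetic (NL, SQL) training pairs.

```python
# Minimal sketch of template-based NL2SQL data generation.
# TABLES, TEMPLATES, and generate_pairs are illustrative assumptions;
# a real pipeline derives slots from the database schema.
TABLES = {"patients": ["age", "diagnosis"]}

# Each template pairs a canonical NL form with a SQL skeleton.
TEMPLATES = [
    ("show the {col} of all {table}", "SELECT {col} FROM {table}"),
    ("how many {table} are there", "SELECT COUNT(*) FROM {table}"),
]

def generate_pairs():
    """Expand every template against every table/column slot filling."""
    pairs = []
    for nl_tpl, sql_tpl in TEMPLATES:
        for table, cols in TABLES.items():
            if "{col}" in nl_tpl:
                for col in cols:
                    pairs.append((nl_tpl.format(col=col, table=table),
                                  sql_tpl.format(col=col, table=table)))
            else:
                pairs.append((nl_tpl.format(table=table),
                              sql_tpl.format(table=table)))
    return pairs

for nl, sql in generate_pairs():
    print(nl, "->", sql)
```

The synthetic pairs can then be paraphrased (by crowdworkers, as in Wang et al., 2015, or automatically) to add linguistic variety before training.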
“…Salvatore et al. (2019) focus on textual entailment and probe models with synthetic examples. In semantic parsing, Wang et al. (2015), Iyer et al. (2017), and Weir et al. (2020) use templates to augment the training data for text-to-SQL tasks, and Geva et al. (2020) do so to improve numerical reasoning, as does related work on tabular data. They also create minimal contrastive examples (Kaushik et al., 2020; Gardner et al., 2020) by automatically swapping entities in the statements with plausible alternatives that exist elsewhere in the table.…”
Section: Synthetic Data
confidence: 99%
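The entity-swapping step mentioned above can be sketched as follows. The toy table and the `swap_entity` helper are assumptions for illustration; the cited work draws the replacement from values that actually occur elsewhere in the same table, so the contrastive statement stays plausible.

```python
import random

# Hypothetical toy table; a real pipeline draws alternatives from the
# same column of the table the statement was written about.
TABLE = {
    "city": ["Paris", "Berlin", "Madrid"],
    "country": ["France", "Germany", "Spain"],
}

def swap_entity(statement, column, rng=random.Random(0)):
    """Create a contrastive variant by replacing a mentioned entity
    with a plausible alternative from the same table column."""
    for value in TABLE[column]:
        if value in statement:
            alternatives = [v for v in TABLE[column] if v != value]
            return statement.replace(value, rng.choice(alternatives))
    return statement  # no entity from this column found

print(swap_entity("Paris is the capital of France", "city"))
```

Because the swapped-in value comes from the same column, the resulting statement is grammatical but (usually) false, which is what makes it useful as a minimal contrastive example.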
“…Both works rely on manual paraphrases and hand-tuned annotations on each database attribute. Training with synthetic data has also been explored to complement existing datasets (Weir et al., 2020) and in the few-shot setting.…”
Section: Related Work
confidence: 99%