Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1082

Coupling Retrieval and Meta-Learning for Context-Dependent Semantic Parsing

Abstract: In this paper, we present an approach to incorporate retrieved datapoints as supporting evidence for context-dependent semantic parsing, such as generating source code conditioned on the class environment. Our approach naturally combines a retrieval model and a meta-learner, where the former learns to find similar datapoints from the training data, and the latter considers retrieved datapoints as a pseudo task for fast adaptation. Specifically, our retriever is a context-aware encoder-decoder model with a laten…
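To make the fast-adaptation idea in the abstract concrete, here is a minimal sketch of a MAML-style inner-loop update that treats retrieved datapoints as a pseudo task. A toy least-squares model stands in for the paper's neural parser; the function name, learning rate, and data shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fast_adapt(theta, support_x, support_y, lr=0.1):
    """One MAML-style inner-loop step: adapt parameters on the retrieved
    datapoints (the 'pseudo task') before predicting for the query.
    Toy linear regression stands in for the real parser (assumption)."""
    residual = support_x @ theta - support_y               # predictions minus targets
    grad = 2.0 * support_x.T @ residual / len(support_y)   # gradient of mean squared error
    return theta - lr * grad                               # task-adapted parameters

# Hypothetical usage: adapt on retrieved neighbours, then predict for the query.
theta = np.zeros(4)
support_x, support_y = np.random.randn(5, 4), np.random.randn(5)
theta_adapted = fast_adapt(theta, support_x, support_y)
```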


Cited by 33 publications (31 citation statements). References 26 publications (50 reference statements).
“…Our goal is to learn a QC model, denoted as $f^{QC}_{\theta}$, that retrieves the highest-scoring code snippets for an input question: $\arg\max_{c' \in \{c\} \cup \{c^{-}\}} f^{QC}_{\theta}(q, c')$. Note that at test time, the trained QC model $f^{QC}$ can be used to retrieve code snippets from any code base, unlike the group of QC methods (Hayati et al., 2018; Hashimoto et al., 2018; Guo et al., 2019) relying on the availability of NL descriptions of code.…”
Section: Overview
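As a minimal sketch of the retrieval objective in this statement, the snippet below selects the arg-max candidate from the pool $\{c\} \cup \{c^{-}\}$. It assumes hypothetical precomputed encodings, and a dot product stands in for the trained scorer $f^{QC}_{\theta}$.

```python
import numpy as np

def retrieve_code(question_vec, candidate_vecs):
    """Pick the highest-scoring code snippet from the candidate pool.
    A dot product stands in for the learned scorer f_theta^QC (assumption)."""
    scores = candidate_vecs @ question_vec   # one score per candidate snippet
    return int(np.argmax(scores))            # index of the arg-max candidate

# Hypothetical usage with random stand-in encodings.
q = np.random.randn(8)                  # encoding of the input question
pool = np.random.randn(10, 8)           # encodings of 10 candidate snippets
best = retrieve_code(q, pool)
```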
“…Since there is only one question for each sample in the QC datasets, we apply a standard code summarization tool (Iyer et al., 2016) to generate code descriptions to match with input questions. • vMF-VAE (Guo et al., 2019). This is similar to EditDist, but a vMF Variational Autoencoder (Xu and Durrett, 2018) is separately trained to embed questions and code descriptions into latent vector distributions, whose distance is then measured by KL divergence.…”
Section: Baselines and Evaluation Metrics
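For context on the vMF-VAE baseline, here is a minimal sketch of the closed-form KL divergence between two von Mises-Fisher distributions, the quantity used to compare the latent distributions. The actual baseline trains encoders end-to-end following Xu and Durrett (2018); the unit mean directions and fixed concentrations below are illustrative assumptions.

```python
import numpy as np
from scipy.special import ive  # exponentially scaled Bessel: ive(v, x) = I_v(x) * exp(-x)

def log_norm(d, kappa):
    """Log normalizer log C_d(kappa) of a vMF density on the (d-1)-sphere."""
    v = d / 2.0 - 1.0
    log_bessel = np.log(ive(v, kappa)) + kappa   # numerically stable log I_v(kappa)
    return v * np.log(kappa) - (d / 2.0) * np.log(2.0 * np.pi) - log_bessel

def kl_vmf(mu1, kappa1, mu2, kappa2):
    """Closed-form KL( vMF(mu1, kappa1) || vMF(mu2, kappa2) ); mu's are unit vectors."""
    d = mu1.shape[0]
    mean_len = ive(d / 2.0, kappa1) / ive(d / 2.0 - 1.0, kappa1)  # A_d(kappa1) = E||x·mu1||
    return (log_norm(d, kappa1) - log_norm(d, kappa2)
            + mean_len * (kappa1 - kappa2 * float(mu2 @ mu1)))

# Hypothetical usage with two random unit mean directions.
mu1 = np.random.randn(16); mu1 /= np.linalg.norm(mu1)
mu2 = np.random.randn(16); mu2 /= np.linalg.norm(mu2)
print(kl_vmf(mu1, 50.0, mu2, 50.0))
```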