Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/P19-1007

Semantic Parsing with Dual Learning

Abstract: Semantic parsing converts natural language queries into structured logical forms. The paucity of annotated training samples is a fundamental challenge in this field. In this work, we develop a semantic parsing framework with the dual learning algorithm, which enables a semantic parser to make full use of data (labeled and even unlabeled) through a dual-learning game. This game between a primal model (semantic parsing) and a dual model (logical form to query) forces them to regularize each other, and can achieve…
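
The dual-learning game described in the abstract can be pictured as two policies rewarding each other on a round trip. Below is a minimal runnable sketch of one primal-start cycle, assuming REINFORCE-style updates; PolicyStub, is_well_formed, dual_game_step, and the reward weight alpha are hypothetical stand-ins for illustration, not the authors' released code.

```python
import random

class PolicyStub:
    """Stand-in for a trained seq2seq policy (e.g., an attention-based
    encoder-decoder). A real model would decode and score sequences."""
    def __init__(self, name):
        self.name = name

    def sample(self, src_tokens):
        # Toy "decoding": shuffle the input tokens. Returns the sampled
        # output sequence and its (fake) log-probability.
        out = src_tokens[:]
        random.shuffle(out)
        return out, -float(len(out))

    def log_prob(self, src_tokens, tgt_tokens):
        # Fake likelihood score of tgt_tokens given src_tokens.
        return -float(len(tgt_tokens))

    def reinforce(self, log_p, reward):
        # A real implementation would apply a policy-gradient update
        # proportional to reward * grad(log_p).
        print(f"{self.name}: reward {reward:.2f} (log_p {log_p:.2f})")

def is_well_formed(logical_form):
    # Validity reward: 1.0 if the sampled logical form is non-empty here;
    # a real check would parse it against the target grammar.
    return 1.0 if logical_form else 0.0

def dual_game_step(parser, generator, query, alpha=0.5):
    """One primal-start cycle (query -> logical form -> query).
    Only an unlabeled query is needed, which is how the game
    exploits unlabeled data."""
    lf, log_p_primal = parser.sample(query)
    r_validity = is_well_formed(lf)              # intermediate reward
    log_p_dual = generator.log_prob(lf, query)   # reconstruction reward
    reward = alpha * r_validity + (1 - alpha) * log_p_dual
    parser.reinforce(log_p_primal, reward)       # update primal model
    generator.reinforce(log_p_dual, reward)      # update dual model

parser = PolicyStub("primal: query -> logical form")
generator = PolicyStub("dual: logical form -> query")
dual_game_step(parser, generator, "flights from boston to denver".split())
```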

Cited by 57 publications (39 citation statements); references 48 publications.
Citation types: 0 supporting, 37 mentioning, 0 contrasting.

“…As shown in the "syntrain" row of Table 3, retraining the model on the combination of this data and the supervised data leads to overfitting in the training environments. A method related to data-augmentation is jointly supervising the model using the training data in the reverse direction, for example by generating utterance from query (Fried et al., 2018; Cao et al., 2019). For Spider, we find that this dual objective (57.…”
Section: Results (mentioning)
confidence: 84%
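
The "dual objective" the quoted passage mentions amounts to supervising the same labeled pair in both directions and summing the two losses. A minimal sketch under that assumption; joint_dual_objective and the weight lam are illustrative names, not taken from the cited papers.

```python
def joint_dual_objective(nll_forward, nll_backward, lam=1.0):
    """Sum the supervised losses of both directions on one labeled pair:
    nll_forward  = -log P(logical_form | query)   # semantic parsing
    nll_backward = -log P(query | logical_form)   # utterance generation
    """
    return nll_forward + lam * nll_backward

# Toy usage with made-up per-example losses:
print(joint_dual_objective(2.3, 1.7))  # 4.0
```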
“…It will be a future work to incorporate relations among dialogue states, utterances and domain schemata. To further mitigate the data sparsity problem of multi-domain DST, it would be also interesting to incorporate data augmentations (Zhao et al., 2019) and semi-supervised learnings (Lan et al., 2018; Cao et al., 2019).…”
Section: Discussion (mentioning)
confidence: 99%
“…They are either synthetic-only (Marzoev et al., 2020) or use human data from other Overnight domains (Herzig and Berant, 2018b). For reference, we also include two of the best-performing models that use in-domain human data (Cao et al., 2019; Chen et al., 2018).…”
Section: Applying AutoQA to Overnight (mentioning)
confidence: 99%
“…Numbers are copied from the cited papers. We report the numbers for the BL-Att model of Damonte et al. (2019), Att+Dual+LF of Cao et al. (2019), ZEROSHOT model of…”
(mentioning)
confidence: 99%