Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.608

Unsupervised Dual Paraphrasing for Two-stage Semantic Parsing

Abstract: One daunting problem for semantic parsing is the scarcity of annotation. Aiming to reduce nontrivial human labor, we propose a two-stage semantic parsing framework, where the first stage utilizes an unsupervised paraphrase model to convert an unlabeled natural language utterance into the canonical utterance. The downstream naive semantic parser accepts the intermediate output and returns the target logical form. Furthermore, the entire training process is split into two phases: pre-training and cycle learning.…
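The abstract's pipeline can be pictured with a minimal sketch (all class and function names below are hypothetical stand-ins, not the authors' code): an unsupervised paraphrase model rewrites the free-form utterance into a canonical utterance, and a deliberately simple parser maps that canonical form to the logical form.

# Minimal sketch of the two-stage framework from the abstract.
# All names here (two_stage_parse, RuleParaphraser, NaiveParser)
# are hypothetical stand-ins, not the authors' implementation.

def two_stage_parse(utterance, paraphrase_model, naive_parser):
    # Stage 1: unsupervised paraphrase of the natural-language
    # utterance into a canonical utterance.
    canonical = paraphrase_model.paraphrase(utterance)
    # Stage 2: a simple parser maps the canonical utterance to
    # the target logical form.
    return naive_parser.parse(canonical)

class RuleParaphraser:
    def paraphrase(self, utterance):
        # Toy stand-in: the real model is learned without labels.
        return "list flights whose origin is NYC and destination is LA"

class NaiveParser:
    def parse(self, canonical):
        # The canonical utterance stays close to the logical form,
        # so even a grammar- or template-based parser can cover it.
        return "SELECT flight WHERE origin = NYC AND dest = LA"

print(two_stage_parse("flights from NYC to LA",
                      RuleParaphraser(), NaiveParser()))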

Cited by 26 publications (21 citation statements)
References 34 publications
“…For many Natural Language Processing (NLP) tasks there exist primal and dual task pairs, such as open information narration (OIN) and open information extraction (OIE) (Sun et al., 2018), natural language understanding (NLU) and natural language generation (NLG), semantic parsing and natural language generation (Ye et al., 2019; Cao et al., 2019, 2020), link prediction and entailment graph induction (Cao et al., 2019), and query-to-response and response-to-query generation (Shen and Feng, 2020). The duality between the primal task and the dual task acts as a constraint: both models must agree on the same joint probability.…”
Section: B1 Dual Learning (mentioning)
confidence: 99%
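To make the shared-joint-probability constraint concrete (the notation below is ours, not taken from the cited papers): writing the primal model as P(y | x; θ_xy) and the dual model as P(x | y; θ_yx), duality requires both factorizations of the joint to agree, and in practice this is usually relaxed into a regularization term, as in the LaTeX sketch below.

% Duality constraint, illustrative notation (ours):
% both factorizations of the joint probability must match,
%   P(x) P(y | x; theta_xy) = P(y) P(x | y; theta_yx),
% which is relaxed into a squared-error regularizer on log-probs:
\[
  \mathcal{L}_{\mathrm{dual}} =
    \bigl( \log \hat{P}(x) + \log P(y \mid x;\, \theta_{xy})
         - \log \hat{P}(y) - \log P(x \mid y;\, \theta_{yx}) \bigr)^{2}
\]
% Here \hat{P}(x) and \hat{P}(y) are marginal probabilities,
% typically estimated by pretrained language models on each domain.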
“…The abundance of high-quality labeled data is critical for effectively training supervised models [31, 51]. However, manually annotating NL utterances with their corresponding SC is expensive, cumbersome, and time-consuming [10, 38, 52, 53]. Although new datasets keep emerging, their limitations in quantity, quality, and cross-domain coverage mean that the scarcity of labeled data remains the primary bottleneck for data-driven supervised models [49].…”
Section: Insights Gained From NLSCG Research Backlog (mentioning)
confidence: 99%
“…Approaches based on search concepts, such as combining retrieval methods with neural code generation models [18, 46] and reranking a list of candidate outputs [6], are relatively feasible. Besides these, data augmentation [45, 63] over the training data, as well as two-stage [52] and sketch-based [24] generation tactics, are also worth considering.…”
Section: Perspectives On NLSCG Latest Technology Landscape (mentioning)
confidence: 99%
“…The dual learning mechanism enables a pair of dual systems to learn automatically from unlabeled data through a closed-loop game. The idea of dual learning has been applied to various tasks, such as question answering (Tang et al., 2017) and question generation (Tang et al., 2018), image-to-image translation (Yi et al., 2017), open-domain information extraction/narration, text simplification, semantic parsing (Cao et al., 2019; Cao et al., 2020), and dialogue state tracking (Chen et al., 2020c).…”
Section: End-to-end (mentioning)
confidence: 99%
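As an illustration of the closed-loop idea (entirely our toy construction; the cited systems use neural sequence models), the round trip primal -> dual can score itself: if the dual model reconstructs the original input, both sampled decisions get reinforced.

# Toy illustration of the dual-learning closed loop (our construction,
# not the cited papers' code). A primal mapping (utterance -> canonical
# form) and a dual mapping (canonical form -> utterance) learn from an
# unlabeled utterance: reconstruction success is the only reward.
import random

PRIMAL_RULES = [str.lower, str.upper]  # candidate primal mappings
DUAL_RULES = [str.lower, str.upper]    # candidate dual mappings
weights = {"primal": [1.0, 1.0], "dual": [1.0, 1.0]}

def sample(rules, w):
    # Sample one rule in proportion to its current weight.
    i = random.choices(range(len(rules)), weights=w)[0]
    return i, rules[i]

def dual_learning_step(utterance, lr=0.1):
    # Forward: primal maps the utterance to a canonical form.
    pi, primal = sample(PRIMAL_RULES, weights["primal"])
    canonical = primal(utterance)
    # Backward: dual tries to reconstruct the original utterance.
    di, dual = sample(DUAL_RULES, weights["dual"])
    reconstruction = dual(canonical)
    # Reward is 1 only if the round trip reproduces the input;
    # both sampled choices are reinforced accordingly.
    reward = 1.0 if reconstruction == utterance else 0.0
    weights["primal"][pi] += lr * reward
    weights["dual"][di] += lr * reward

for _ in range(200):
    dual_learning_step("book a flight to boston")

print(weights)  # mutually consistent rule pairs accumulate weight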