2021
DOI: 10.48550/arxiv.2104.08768
Preprint

Constrained Language Models Yield Few-Shot Semantic Parsers

Abstract: We explore the use of large pretrained language models as few-shot semantic parsers. The goal in semantic parsing is to generate a structured meaning representation given a natural language input. However, language models are trained to generate natural language. To bridge the gap, we use language models to paraphrase inputs into a controlled sublanguage resembling English that can be automatically mapped to a target meaning representation. With a small amount of data and very little code to convert into Engli…

Cited by 10 publications (14 citation statements)
References 51 publications (42 reference statements)

Citation statements (ordered by relevance):
“…The parse of each utterance is produced via few-shot prompting (Brown et al., 2020): the utterance is added to the end of the prompt, and the subsequent GPT-3 generation is interpreted as the target parse. We found that this simple parsing technique works well and could easily be applied to other parsing-based tasks, as in Shin et al. (2021). The parsing prompts are reproduced in full in the Appendix.…”
Section: Integrating System 1 and System 2
confidence: 87%
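The statement above describes plain few-shot prompting for parsing: worked (utterance, parse) pairs are concatenated into a prompt, the new utterance is appended, and the model's completion is read off as the parse. A minimal sketch of that setup follows, assuming a generic completion call; the complete() helper, the example pairs, and the parse syntax are illustrative assumptions, not the cited authors' code.

    # Few-shot prompting for semantic parsing (illustrative sketch).
    EXAMPLES = [
        ("add lunch with Ana tomorrow", "CreateEvent(title='lunch with Ana', date=tomorrow)"),
        ("what's on my calendar friday", "FindEvents(date=friday)"),
    ]

    def build_prompt(utterance: str) -> str:
        """Concatenate the worked examples and append the new utterance."""
        shots = "\n\n".join(f"utterance: {u}\nparse: {p}" for u, p in EXAMPLES)
        return f"{shots}\n\nutterance: {utterance}\nparse:"

    def parse(utterance: str, complete) -> str:
        """complete(prompt, stop) -> str is a hypothetical LM completion call (e.g. wrapping GPT-3)."""
        return complete(build_prompt(utterance), stop="\n").strip()

    # Usage: parse("move my 3pm meeting to 4pm", complete=my_lm_call)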
“…Directly learning such a language model is challenging, as the difference between the formal language and natural language is huge. To bridge the gap, Berant and Liang (2014) and Shin et al. (2021) propose Semantic Parsing via Paraphrasing (SPP), a two-stage framework. In the first stage, they paraphrase u to its canonical utterance c using a paraphrasing language model:…”
Section: Preliminary
confidence: 99%
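The quoted passage breaks off before the formula it introduces. A typical SPP formulation, written here as a sketch (the symbols u and c for the input and canonical utterances follow the quotation; the notation for the paraphrasing model and the grammar step is an assumption):

    \hat{c} = \arg\max_{c \in \mathcal{C}} \; p_{\theta}(c \mid u), \qquad \hat{z} = \mathrm{grammar}(\hat{c})

where p_theta is the paraphrasing language model, C is the set of canonical utterances admitted by the grammar, and grammar(.) is the deterministic mapping from the chosen canonical utterance to the target meaning representation z.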
“…Considering the difference between natural and formal language, adapting LMs to semantic parsing is non-trivial. Prior works typically first finetune the LMs to generate canonical utterances, which are then transformed into the final formal language through grammars (Shin et al., 2021; Schucher et al., 2021).…”
Section: Introduction
confidence: 99%
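Both of the preceding statements refer to a deterministic grammar step that turns the canonical utterance into the final formal language. A toy sketch of that mapping follows; the rules and the map_canonical() helper are illustrative assumptions, not the grammar used by the cited work.

    # Toy canonical-utterance -> formal-language mapping (illustrative sketch).
    import re

    # Each rule pairs a canonical English-like pattern with a template for the
    # target meaning representation.
    RULES = [
        (r'create an event called "(?P<name>.+)"', 'CreateEvent(name="{name}")'),
        (r'list events on (?P<day>\w+)',           'FindEvents(day="{day}")'),
    ]

    def map_canonical(canonical: str) -> str:
        """Deterministically rewrite a canonical utterance into the formal target."""
        for pattern, template in RULES:
            m = re.fullmatch(pattern, canonical.strip())
            if m:
                return template.format(**m.groupdict())
        raise ValueError(f"no rule matches: {canonical!r}")

    print(map_canonical('create an event called "team sync"'))
    # -> CreateEvent(name="team sync")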
“…As with other NLP tasks, SOTA results in semantic parsing, including task-oriented parsing, have been obtained by fine-tuning large pretrained language models [6, 14, 17], particularly pretrained encoder-decoder architectures such as BART [12]. Shin et al. [18] leverage the PLM's ability to generate natural language by reframing semantic parsing as a paraphrasing task. They use the PLM to map utterances into a canonical (but highly restricted) natural-language representation that can then be automatically mapped to the final output target.…”
Section: Related Work
confidence: 99%
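The "constrained" part of the paper's title refers to keeping the model's generation inside that highly restricted canonical sublanguage. The sketch below illustrates the general idea of prefix-constrained greedy decoding; the tiny canonical-utterance inventory and the lm_score() stand-in are assumptions, and a real system would check prefix validity against an induced grammar and score continuations with a pretrained LM such as GPT-3 or BART.

    # Prefix-constrained greedy decoding into a canonical sublanguage (toy sketch).
    CANONICAL = [
        'create an event called "team sync"',
        'list events on friday',
    ]

    def valid_next_tokens(prefix: str) -> set[str]:
        """Tokens (here: whitespace-split words) that keep the prefix inside the sublanguage."""
        allowed = set()
        done = prefix.split()
        for utt in CANONICAL:
            words = utt.split()
            if words[:len(done)] == done and len(words) > len(done):
                allowed.add(words[len(done)])
        return allowed

    def lm_score(prefix: str, token: str) -> float:
        """Stand-in for a pretrained LM's next-token score (assumption)."""
        return -len(token)  # toy heuristic: prefer shorter tokens

    def constrained_greedy_decode(prefix: str = "") -> str:
        while True:
            choices = valid_next_tokens(prefix)
            if not choices:
                return prefix
            best = max(choices, key=lambda t: lm_score(prefix, t))
            prefix = (prefix + " " + best).strip()

    print(constrained_greedy_decode())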