2021
DOI: 10.48550/arxiv.2110.08525
Preprint

The Power of Prompt Tuning for Low-Resource Semantic Parsing

Abstract: Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language tasks. In this paper, we investigate prompt tuning for semantic parsing, the task of mapping natural language utterances onto formal meaning representations. For large T5 models we find (i) that prompt tuning significantly outperforms fine-tuning in the low-data regime and (ii) that canonicalization (i.e., naturalizing the meaning representations) barely improves performance. This last result…
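A minimal sketch of the prompt-tuning setup the abstract describes, assuming a Hugging Face T5 checkpoint: the pretrained model stays frozen and only a small matrix of soft prompt embeddings, prepended to the utterance's token embeddings, is trained to produce the meaning representation. The checkpoint name, prompt length, and learning rate below are illustrative, not the paper's exact configuration.

```python
# Sketch: prompt tuning a frozen T5 for semantic parsing (illustrative hyperparameters).
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

model_name = "t5-small"        # illustrative; the paper studies larger T5 models
num_prompt_tokens = 100        # illustrative soft-prompt length

tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
for p in model.parameters():
    p.requires_grad = False    # the language model itself is never updated

# The only trainable parameters: one embedding vector per virtual prompt token.
soft_prompt = torch.nn.Parameter(torch.randn(num_prompt_tokens, model.config.d_model) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=0.3)  # large LR is common for prompt tuning

def train_step(utterance: str, meaning_representation: str) -> float:
    enc = tokenizer(utterance, return_tensors="pt")
    labels = tokenizer(meaning_representation, return_tensors="pt").input_ids
    # Prepend the soft prompt to the utterance's token embeddings.
    tok_embeds = model.get_input_embeddings()(enc.input_ids)            # (1, L, d_model)
    inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), tok_embeds], dim=1)
    attention_mask = torch.cat(
        [torch.ones(1, num_prompt_tokens, dtype=enc.attention_mask.dtype),
         enc.attention_mask], dim=1)
    loss = model(inputs_embeds=inputs_embeds,
                 attention_mask=attention_mask,
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Because only `soft_prompt` receives gradients, the trainable parameter count is `num_prompt_tokens * d_model`, a tiny fraction of the full model, which is what makes the comparison against full fine-tuning in the low-data regime interesting.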

Cited by 5 publications (7 citation statements)
References 16 publications (30 reference statements)

“…One is tuning-free prompting: for example, Petroni et al. (2019) and Shin et al. (2020) used a fill-in-the-blank paradigm, while Brown et al. (2020) and Shin et al. (2021) used "few-shot" prompts that included several examples of inputs followed by target outputs, with the actual task input appended at the end. One is Fixed-LM Prompt Tuning, as used by Li and Liang (2021); Schucher et al. (2021); Qin and Eisner (2021); Liu et al. (2021b), which requires training fewer parameters than tuning the whole model. Another is Fixed-prompt LM Tuning, which is similar to our setting.…”
Section: Compositional Generalization In Semantic
confidence: 99%
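In contrast to the prompt-tuning sketch above, the tuning-free ("few-shot") paradigm described in this statement needs no gradient updates at all: a handful of input/output pairs are concatenated and the actual task input is appended at the end. The calendar-style utterances and meaning representations below are invented purely for illustration.

```python
# Sketch: building a tuning-free few-shot prompt for semantic parsing.
# The examples and the Lisp-like meaning representations are made up.
FEW_SHOT_EXAMPLES = [
    ("book a meeting with Ana at 3 pm tomorrow",
     '(CreateEvent (attendee "Ana") (start (Tomorrow) (Time 15 00)))'),
    ("what is on my calendar friday",
     "(QueryEvents (date (Friday)))"),
]

def build_prompt(utterance: str) -> str:
    blocks = [f"utterance: {nl}\nparse: {mr}" for nl, mr in FEW_SHOT_EXAMPLES]
    blocks.append(f"utterance: {utterance}\nparse:")   # actual task input goes last
    return "\n\n".join(blocks)

print(build_prompt("cancel my 9 am standup on monday"))
```

The resulting string is fed to a frozen pretrained LM as-is, and the model's continuation after the final "parse:" is read off as the predicted meaning representation.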
“…Considering the difference between natural and formal language, adapting LMs to semantic parsing is non-trivial. Prior works typically first fine-tune the LMs to generate a canonical utterance, which is then transformed into the final formal language through grammars (Shin et al., 2021; Schucher et al., 2021).…”
Section: Introduction
confidence: 99%
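A toy sketch of the two-stage pipeline this statement describes: the LM is only asked to produce a controlled, natural-sounding canonical utterance, and a small hand-written grammar then maps that utterance deterministically onto the formal representation. The rules, domain, and output format below are invented for illustration and are not the cited authors' grammar.

```python
# Sketch: mapping canonical utterances to an (invented) formal language via rules.
import re

CANONICAL_RULES = [
    (re.compile(r"^create an event with (?P<name>\w+) on (?P<day>\w+)$"),
     lambda m: f'(CreateEvent (attendee "{m["name"]}") (date ({m["day"].capitalize()})))'),
    (re.compile(r"^list events on (?P<day>\w+)$"),
     lambda m: f'(QueryEvents (date ({m["day"].capitalize()})))'),
]

def canonical_to_formal(canonical: str) -> str:
    """Translate a canonical utterance into the formal meaning representation."""
    for pattern, build in CANONICAL_RULES:
        match = pattern.match(canonical.strip().lower())
        if match:
            return build(match)
    raise ValueError(f"no grammar rule matches: {canonical!r}")

print(canonical_to_formal("Create an event with Ana on Friday"))
# -> (CreateEvent (attendee "ana") (date (Friday)))
```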
“…Nevertheless, recent work (Shin et al., 2021; Wu et al., 2021; Yin et al., 2021; Schucher et al., 2021) has demonstrated decent performance given just a small seed dataset D, by combining pretrained language models with constrained decoding. For example, Shin et al. (2021) use only 300 labeled examples in the complex SMCalFlow dialogue domain (Andreas et al., 2020).…”
Section: Related Work
confidence: 99%
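The constrained-decoding idea mentioned here can be sketched with the `prefix_allowed_tokens_fn` hook of Hugging Face's `generate`, which restricts the tokens the decoder may emit at each step. A real system would derive the legal continuations from a grammar state over the decoded prefix; the fixed whitelist, checkpoint, and token inventory below are stand-ins for illustration only.

```python
# Sketch: pretrained seq2seq LM combined with (toy) constrained decoding.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")            # illustrative checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical whitelist: only tokens that appear in the target formal language.
allowed_ids = sorted(set(tokenizer("( ) CreateEvent QueryEvents attendee date").input_ids))

def allow_only_grammar_tokens(batch_id: int, prefix_ids: torch.Tensor) -> list:
    # A real constrained decoder would consult a grammar state built from
    # prefix_ids; here every step simply permits the same whitelist.
    return allowed_ids

inputs = tokenizer("parse: book a meeting with Ana", return_tensors="pt")
out = model.generate(**inputs,
                     prefix_allowed_tokens_fn=allow_only_grammar_tokens,
                     max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Without any task-specific training this will not produce a sensible parse; the point is only that the decoder can never emit a token outside the formal language's vocabulary, which is what makes small seed datasets workable in the cited approaches.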