Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) 2022
DOI: 10.18653/v1/2022.acl-short.17
The Power of Prompt Tuning for Low-Resource Semantic Parsing

Abstract: Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language understanding and generation tasks. In this paper, we investigate prompt tuning for semantic parsing, the task of mapping natural language utterances onto formal meaning representations. On the low-resource splits of Overnight and TOPv2, we find that a prompt-tuned T5-xl significantly outperforms its fine-tuned counterpart, as well as strong GPT-3 and BART baselines. We also conduct ablation…
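As a rough illustration of the technique the abstract refers to, the sketch below prepends a small matrix of trainable prompt embeddings to the input embeddings of a frozen T5 model and updates only those prompt parameters. This is a minimal sketch, not the authors' released implementation: the model size (t5-small rather than T5-xl), the prompt length, the learning rate, and the example utterance/meaning-representation pair are illustrative placeholders.

```python
# Minimal soft prompt tuning sketch: only the prompt embeddings are trained,
# while the pre-trained T5 weights stay frozen. (Illustrative; not the paper's code.)
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model = T5ForConditionalGeneration.from_pretrained("t5-small")  # paper uses T5-xl
tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model.requires_grad_(False)  # freeze every pre-trained parameter

num_prompt_tokens = 20
prompt = torch.nn.Parameter(
    torch.randn(num_prompt_tokens, model.config.d_model) * 0.5
)  # the only trainable weights

# Placeholder utterance and TOP-style meaning representation, for illustration only.
utterance = "book a flight from toronto to montreal tomorrow"
meaning_repr = "[IN:CREATE_FLIGHT [SL:SOURCE toronto ] [SL:DEST montreal ] ]"

enc = tokenizer(utterance, return_tensors="pt")
labels = tokenizer(meaning_repr, return_tensors="pt").input_ids

# Prepend the soft prompt to the token embeddings of the frozen encoder.
token_embeds = model.get_input_embeddings()(enc.input_ids)             # (1, seq, d_model)
inputs_embeds = torch.cat([prompt.unsqueeze(0), token_embeds], dim=1)  # (1, P + seq, d_model)
attention_mask = torch.cat(
    [torch.ones(1, num_prompt_tokens, dtype=enc.attention_mask.dtype), enc.attention_mask],
    dim=1,
)

optimizer = torch.optim.AdamW([prompt], lr=0.3)  # only the prompt is optimized
loss = model(inputs_embeds=inputs_embeds, attention_mask=attention_mask, labels=labels).loss
loss.backward()
optimizer.step()
```

Because only num_prompt_tokens × d_model values are trained, the frozen backbone can be shared across tasks, which is part of what makes prompt tuning attractive in the low-resource setting the paper studies.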

Cited by 16 publications (9 citation statements) | References 24 publications
“…Existing prompt tuning methods attempt to find the optimal prompts after adding prompts to the input to adapt a fixed language model to a specific task. However, the performance of prompt tuning depends primarily on the frequency of the terms in the pretraining corpus [16]. We introduce a domain-specific prompt tuning method that adapts the frozen language model to data distribution in the downstream domain to address this issue.…”
Section: Methods (mentioning)
Confidence: 99%
“…LLMs are increasingly used for semantic parsing in low-data scenarios utilizing canonical representations (Shin et al., 2021; Yang et al., 2022), and prompt-tuning (Schucher et al., 2022; Drozdov et al., 2022). The closest work to ours is Zhao et al. (2022), where they decompose parsing into QA tasks.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Our paper tackles multi-aspect few-shot learning where the target is a structured output containing multiple attributes, and the model only has a few examples from which to learn such a pattern. Given the complexity of the task, previous research trained custom models for each dataset (Zheng et al., 2022; Schucher et al., 2022). Instead, we leverage the power of LLMs (Brown et al., 2020; Chung et al., 2022) to design a more general solution through controlled data synthesis.…”
Section: Related Work (mentioning)
Confidence: 99%