Findings of the Association for Computational Linguistics: NAACL 2022
DOI: 10.18653/v1/2022.findings-naacl.31

Few-Shot Self-Rationalization with Natural Language Prompts

Abstract: Self-rationalization models that predict task labels and generate free-text elaborations for their predictions could enable more intuitive interaction with NLP systems. These models are, however, currently trained with a large amount of human-written free-text explanations for each task, which hinders their broader usage. We propose to study a more realistic setting of self-rationalization using few training examples. We present FEB, a standardized collection of four existing English-language datasets and associ…

Cited by 19 publications (26 citation statements)
References 42 publications
“…UnifiedQA-v2-11B is also better than FLAN-T5-XXL (an 11B-parameter model as well). Given that UnifiedQA-v1 (Khashabi et al., 2020) has been effective for tasks beyond QA (Bragg et al., 2021; Marasović et al., 2022), UnifiedQA models are strong but overlooked baselines in recent works on large-scale models.…”
Section: Results
confidence: 99%
“…Large language models like GPT-3 [7] have popularized few-shot prompting in natural language processing (NLP), where several input-output pairs are used as context for the language model to understand the task and generate predictions for a new example. Some prompting techniques [16,47,60,73] have also been developed for more effective or transparent reasoning in NLP. Later, prompting was brought to the vision community [24,31,34,59,74,75].…”
Section: Related Work
confidence: 99%
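The few-shot prompting the excerpt describes — concatenating input-output pairs as context before a new example — can be sketched as plain string assembly. The task, demonstrations, and "Input:/Output:" format below are illustrative assumptions, not taken from any cited paper:

```python
# Minimal sketch of few-shot prompting: demonstrations are concatenated
# as context so the model can infer the task, then the new example is
# appended with its output left blank for the model to complete.
def build_few_shot_prompt(examples, query):
    """examples: list of (input, output) pairs; query: the new input."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    parts.append(f"Input: {query}\nOutput:")  # unanswered query comes last
    return "\n\n".join(parts)

# Hypothetical sentiment demonstrations, purely for illustration.
demos = [
    ("The movie was wonderful.", "positive"),
    ("I wasted two hours.", "negative"),
]
prompt = build_few_shot_prompt(demos, "A delightful surprise.")
print(prompt)
```

The resulting string would be passed verbatim to a language model, which continues the text after the final "Output:".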
“…Learning From In-Context Instructions. The few-shot performance of LMs can be enhanced by learning from in-context instructions (Sanh et al., 2021; Liu et al., 2021b), in the form of task descriptions (Mishra et al., 2021; Raffel et al., 2019), answer demonstrations (Brown et al., 2020), target formats (Marasović et al., 2021), etc., which can be positioned before or even after (Lampinen et al., 2022) the answer. Recent studies have shown improved results by including decomposed reasoning steps in the instructions (Nye et al., 2021; Narang et al., 2020).…”
Section: Related Work
confidence: 99%
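The excerpt above combines three ingredients — a task description, answer demonstrations, and decomposed reasoning steps — into one prompt. A hedged sketch of that assembly, with all text and field names ("Q:", "Reasoning:", "A:") as illustrative assumptions rather than any cited work's format:

```python
# Sketch of in-context instructions: task description + demonstrations
# that include intermediate reasoning steps, followed by the new query.
def build_instructed_prompt(description, demos, query):
    """demos: list of (question, reasoning, answer) triples."""
    blocks = [description]
    for q, steps, a in demos:
        blocks.append(f"Q: {q}\nReasoning: {steps}\nA: {a}")
    blocks.append(f"Q: {query}\nReasoning:")  # model continues from here
    return "\n\n".join(blocks)

prompt = build_instructed_prompt(
    "Answer the arithmetic question, showing your steps.",
    [("What is 2 + 3 * 4?", "3 * 4 = 12; 2 + 12 = 14.", "14")],
    "What is (1 + 2) * 5?",
)
print(prompt)
```

Ending the prompt at "Reasoning:" encourages the model to emit its steps before the answer, which is the benefit the excerpt attributes to decomposed reasoning instructions.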