Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.373
Unsupervised Commonsense Question Answering with Self-Talk

Abstract: Natural language understanding involves reading between the lines with implicit background knowledge. Current systems either rely on pretrained language models as the sole implicit source of world knowledge, or resort to external knowledge bases (KBs) to incorporate additional relevant knowledge. We propose an unsupervised framework based on self-talk as a novel alternative to multiple-choice commonsense tasks. Inspired by inquiry-based discovery learning (Bruner, 1961), our approach inquires language models w…

Cited by 126 publications (183 citation statements)
References 37 publications (47 reference statements)
“…Our work fits into a body of work that probes language models' knowledge via prompts. Previous works have used manually defined prompts to study an LM's ability to perform: commonsense reasoning (Trinh and Le, 2018; Kwon et al., 2019; Shwartz et al., 2020), question answering, fact recall (Petroni et al., 2019; Jiang et al., 2020; Bouraoui et al., 2019), summarization (Radford et al., 2019), and other supervised tasks (Brown et al., 2020). Schick and Schütze (2020) use manually constructed prompts in conjunction with semi-supervised learning for few-shot learning.…”
Section: Relation to Other Prompting Methods
confidence: 99%
“…Following Shwartz et al. (2020b), we compute the cross-entropy loss of each statement and predict the candidate answer associated with the statement with the lowest loss (the most plausible statement). [Figure 2(a): accuracy score and the average rank of the gold color.]…”
Section: GPT
confidence: 99%
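The scoring scheme quoted above (predict the candidate whose statement receives the lowest language-model cross-entropy loss) can be sketched with a toy model. The unigram probabilities below are hypothetical stand-ins for a real pretrained causal LM such as GPT-2; this is a minimal illustration of the selection rule, not the cited paper's implementation.

```python
import math

# Toy unigram "language model": token -> probability. A real setup would
# score each statement with a pretrained causal LM; these numbers are
# hypothetical and chosen only to make the example self-contained.
TOY_LM = {
    "the": 0.3, "sky": 0.1, "is": 0.2, "blue": 0.05, "green": 0.001,
}
UNK_PROB = 1e-6  # probability mass assigned to out-of-vocabulary tokens

def cross_entropy(statement: str) -> float:
    """Average negative log-probability of the statement's tokens."""
    tokens = statement.lower().split()
    nll = sum(-math.log(TOY_LM.get(t, UNK_PROB)) for t in tokens)
    return nll / len(tokens)

def predict(candidate_statements: list[str]) -> str:
    """Pick the candidate the LM finds most plausible (lowest loss)."""
    return min(candidate_statements, key=cross_entropy)

print(predict(["the sky is green", "the sky is blue"]))  # the sky is blue
```

Because the loss is averaged over tokens, candidates of different lengths are compared on a per-token basis rather than penalizing longer statements outright.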
“…On the other hand, Logan et al. (2019) have shown that LMs are limited in their ability to generate accurate factual knowledge, and Kassner and Schütze (2020) and Ettinger (2020) pointed out that LMs are not sensitive to negation, leading them to generate incorrect facts ("birds can't fly"). Finally, Shwartz et al. (2020b) showed that despite being noisy, knowledge generated by LMs can be used to improve performance on commonsense tasks.…”
Section: Related Work
confidence: 99%
“…It is interesting to note that for QASC, we can reduce the problem from an eight-way to a four-way classification, as our top-4 accuracy on QASC is above 92%. Our unsupervised model outperforms previous approaches, such as Self-Talk (Shwartz et al., 2020). It approaches prior supervised approaches like BIDAF (Seo et al., 2017), and even surpasses it on two tasks.…”
Section: Unsupervised Question Answering
confidence: 55%
“…Recent work (Fabbri et al., 2020) tries to generate a synthetic dataset using a text corpus such as Wikipedia to solve extractive QA. Other works (Shwartz et al., 2020) use large pre-trained generative language models such as GPT-2 (Radford et al., 2019) to generate knowledge, questions, and answers, and compare them against the given answer choices.…”
Section: Introduction
confidence: 99%