Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.609
ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning

Abstract: Recent commonsense-reasoning tasks are typically discriminative in nature, where a model answers a multiple-choice question for a certain context. Discriminative tasks are limiting because they fail to adequately evaluate the model's ability to reason and explain predictions with underlying commonsense knowledge. They also allow such models to use reasoning shortcuts and not be "right for the right reasons". In this work, we present ExplaGraphs, a new generative and structured commonsense-reasoning task (and …
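To make the task format concrete, here is a minimal sketch of what an ExplaGraphs-style instance might look like: a belief and an argument paired with a stance label and an explanation graph expressed as relation triples. The field names and the example content below are illustrative assumptions, not actual data from the paper.

```python
# Illustrative sketch only: field names and example content are assumptions,
# not actual ExplaGraphs data.
example = {
    "belief": "Factory farming should be banned.",
    "argument": "Factory farming causes animal suffering.",
    "stance": "support",
    # Explanation graph as (source concept, relation, target concept) triples.
    "graph": [
        ("factory farming", "causes", "animal suffering"),
        ("animal suffering", "is a", "cruelty"),
        ("cruelty", "is bad for", "animals"),
    ],
}
```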

Cited by 12 publications (30 citation statements) | References: 63 publications
“…Representative works on graph generation from language models include knowledge graph completion models like COMET (Hwang et al., 2021) that fine-tune GPT (Radford et al., 2019; Brown et al., 2020) and BART (Lewis et al., 2020), generation of event influence graphs (Tandon et al., 2019; Madaan et al., 2020), partially ordered scripts (Sakaguchi et al., 2021), temporal graphs (Madaan and Yang, 2021), entailment trees, proof graphs (Saha et al., 2020; Saha et al., 2021a), and commonsense explanation graphs (Saha et al., 2021b). Linguistic tasks like syntactic parsing (Mohammadshahi and Henderson, 2021; Kondratyuk and Straka, 2019) and semantic parsing (Chen et al., 2020b; Shin et al., 2021) have also made use of language models.…”
Section: Related Work
confidence: 99%
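As a hedged sketch of the recipe this excerpt describes (graph generation by fine-tuning a pretrained seq2seq model such as BART), one common approach is to linearize the target graph into a token sequence and train on (text, linearized graph) pairs. The checkpoint, delimiter format, and toy example below are assumptions for illustration, not the method of any specific cited paper.

```python
# Sketch: linearize an explanation graph for seq2seq generation with BART.
# Checkpoint, delimiter format, and the toy example are illustrative assumptions.
from transformers import BartTokenizer, BartForConditionalGeneration

def linearize(triples):
    """Flatten (head, relation, tail) triples into a single target string."""
    return " ".join(f"({h}; {r}; {t})" for h, r, t in triples)

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

source = "Belief: Factory farming should be banned. Argument: It causes animal suffering."
target = linearize([("factory farming", "causes", "animal suffering")])

inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# One illustrative training step; a real run needs a dataset, optimizer, epochs.
loss = model(**inputs, labels=labels).loss
loss.backward()
```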
“…Generative Commonsense Reasoning. While traditional commonsense reasoning tasks are discriminative in nature (Zellers et al., 2018; Talmor et al., 2019; Bisk et al., 2020; Sakaguchi et al., 2020; Talmor et al., 2021), the recent focus on generative evaluation has led to the development of tasks and benchmarks that explore unstructured commonsense sentence generation (Lin et al., 2020), event influence graph generation (Madaan et al., 2020), commonsense explanation graph generation (Saha et al., 2021b), etc. We experiment with two graph generation tasks, primarily focusing on ExplaGraphs (Saha et al., 2021b) because of the clear distinction between the underlying structural constraints and the semantic aspect dealing with commonsense.…”
Section: Related Work
confidence: 99%
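The structural constraints this excerpt refers to include, per the ExplaGraphs task definition, that a predicted explanation graph be a connected directed acyclic graph. A minimal validity check might look like the following plain-Python sketch; the helper name and triple format are mine, not the paper's evaluation code.

```python
from collections import defaultdict

def is_valid_graph(triples):
    """Check ExplaGraphs-style structural constraints: the graph must be
    connected (ignoring edge direction) and acyclic (following direction)."""
    adj = defaultdict(set)         # directed adjacency, for cycle detection
    undirected = defaultdict(set)  # undirected view, for connectivity
    nodes = set()
    for head, _, tail in triples:
        adj[head].add(tail)
        undirected[head].add(tail)
        undirected[tail].add(head)
        nodes.update((head, tail))

    # Connectivity: BFS over the undirected view must reach every node.
    start = next(iter(nodes))
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nbr in undirected[node] - seen:
            seen.add(nbr)
            frontier.append(nbr)
    if seen != nodes:
        return False

    # Acyclicity: three-color DFS over the directed edges.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = dict.fromkeys(nodes, WHITE)

    def has_cycle(node):
        color[node] = GRAY
        for nbr in adj[node]:
            if color[nbr] == GRAY or (color[nbr] == WHITE and has_cycle(nbr)):
                return True
        color[node] = BLACK
        return False

    return not any(color[n] == WHITE and has_cycle(n) for n in nodes)
```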