Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
DOI: 10.18653/v1/2021.acl-long.238
Explanations for CommonsenseQA: New Dataset and Models

Abstract: The CommonsenseQA (CQA) dataset (Talmor et al., 2019) was recently released to advance research on the common-sense question answering (QA) task. Whereas prior work has mostly focused on proposing QA models for this dataset, our aim is to retrieve as well as generate explanations for a given (question, correct answer choice, incorrect answer choices) tuple from this dataset. Our explanation definition is based on certain desiderata, and translates an explanation into a set of positive and negative common-sense …
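The abstract describes each explained instance as a (question, correct answer choice, incorrect answer choices) tuple paired with positive and negative common-sense facts. A minimal sketch of that structure, with hypothetical field names (the actual ECQA release may use different ones):

```python
from dataclasses import dataclass, field

@dataclass
class ECQAInstance:
    # Hypothetical field names illustrating the tuple described in the abstract.
    question: str
    correct_choice: str
    incorrect_choices: list[str]
    # Positive facts support the correct choice; negative facts refute the others.
    positive_facts: list[str] = field(default_factory=list)
    negative_facts: list[str] = field(default_factory=list)

ex = ECQAInstance(
    question="example question text",
    correct_choice="choice A",
    incorrect_choices=["choice B", "choice C"],
    positive_facts=["a fact supporting choice A"],
    negative_facts=["a fact refuting choice B"],
)
```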


Cited by 26 publications (40 citation statements)
References 34 publications (43 reference statements)
“…Datasets in FEB To identify available datasets suitable for few-shot self-rationalization, we start with a recent overview of datasets with free-text explanations ) and filter them according to the following criteria: (i) the input is textual, (ii) the explanation consists of one sentence or 2-3 simple sentences, (iii) the task has a fixed set of possible labels, (iv) the explanation is Training sets for all classification tasks are balanced and contain 48 instances. Sources: E-SNLI (Camburu et al, 2018), ECQA (Aggarwal et al, 2021), COMVE , SBIC (Sap et al, 2020).…”
Section: FEB Benchmark (mentioning)
Confidence: 99%
“…This issue could be alleviated by using model-independent criteria to categorize information content. For example, Aggarwal et al (2021) propose to quantify the information contained in a free-text explanation by calculating the number of distinct words (nouns, verbs, adjectives, and adverbs) per explanation.…”
Section: Information Content (mentioning)
Confidence: 99%
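The statement above describes a model-independent measure: counting the distinct content words (nouns, verbs, adjectives, adverbs) in each explanation. A minimal sketch of that count, using a toy POS lookup table as a stand-in for a real tagger such as spaCy or NLTK:

```python
# Toy POS lookup; a real implementation would use a proper POS tagger.
POS = {
    "dogs": "NOUN", "bark": "VERB", "loud": "ADJ", "loudly": "ADV",
    "very": "ADV", "the": "DET", "a": "DET", "is": "VERB",
}
CONTENT_TAGS = {"NOUN", "VERB", "ADJ", "ADV"}

def distinct_content_words(explanation: str) -> int:
    """Count distinct nouns, verbs, adjectives, and adverbs in an explanation."""
    tokens = explanation.lower().split()
    content = {t for t in tokens if POS.get(t) in CONTENT_TAGS}
    return len(content)

print(distinct_content_words("The dogs bark very loudly"))  # dogs, bark, very, loudly -> 4
```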
“…CommonsenseQA (Talmor et al, 2019) is a multiple choice task posed over commonsense questions. Crowdsourced free-text explanations for instances in CommonsenseQA are provided in the CoS-E v1.11 (Rajani et al, 2019) and ECQA (Aggarwal et al, 2021) datasets. ECQA explanations are counterfactual, i.e., annotators were instructed to explain not only the correct answer choice but also why the others are incorrect.…”
Section: Introduction (mentioning)
Confidence: 99%