Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.231
Right for the Right Reason: Evidence Extraction for Trustworthy Tabular Reasoning

Abstract: When pre-trained contextualized embedding-based models developed for unstructured data are adapted for structured tabular data, they perform admirably. However, recent probing studies show that these models use spurious correlations, and often predict inference labels by focusing on false evidence or ignoring it altogether. To study this issue, we introduce the task of Trustworthy Tabular Reasoning, where a model needs to extract evidence to be used for reasoning, in addition to predicting the label. As a case …

Cited by 2 publications (2 citation statements) | References 44 publications
“…Neural models in NLP can be especially vulnerable to adversarial attacks, and it is important to have solid mechanisms for mitigating such attacks while maintaining good task performance. Based on the intuition that automatically generated adversarial inputs can be undone by learning to manipulate the textual input instead of retraining the classification model, we formulate a mechanism with the valuable property of transferability of defense, which allows the underlying classification model to be deployed to new and unknown models without retraining (Gupta et al. 2023). The assumption is that a single shared model can be more robust than the individual ones, while also reducing overhead when deploying it into new models.…”
Section: Keeping the Training Data Intact
confidence: 99%
“…Systematic Probes for Tables. Tables have previously been used to create probes for table grounding (Gupta et al., 2022b) or to recast non-NLI datasets (e.g., question answering) to NLI (Jena et al., 2022).…”
Section: Related Work
confidence: 99%