Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER) 2020
DOI: 10.18653/v1/2020.fever-1.7
Distilling the Evidence to Augment Fact Verification Models

Abstract: The alarming spread of fake news in social media, together with the impossibility of scaling manual fact verification, motivated the development of natural language processing techniques to automatically verify the veracity of claims. Most approaches perform claim-evidence classification without providing any insight into why the claim is trustworthy or not. We propose, instead, a model-agnostic framework that consists of two modules: (1) a span extractor, which identifies the crucial information connecting…

Cited by 10 publications (6 citation statements) · References 15 publications
“…This, however, does not remove the threats of echo chambers and misinformation. As future work, we plan to add a new module based on our previous work [ 54 ] to better analyze phenomena related to the spread of misinformation.…”
Section: Discussion
confidence: 99%
“…The rest of the documents (i.e., those with disambiguative information) are ranked and filtered out using NSNM and a threshold value. Several works (Ma et al., 2019; Nie et al., 2019b; Zhong et al., 2020; Portelli et al., 2020) (2018) aims for high precision using exact matching techniques. In addition, as we observe in Table 2, most of the works that have been developed for the competition shared task (2018) focus on hand-crafted features.…”
Section: Keyword-based Methods
confidence: 99%
“…For the sentence retrieval task, several pipeline methods in the literature rely on the sentence retrieval component of the baseline method (Thorne et al., 2018a). Specifically, these methods (Chernyavskiy and Ilvovsky, 2019; Portelli et al., 2020; Taniguchi et al., 2018; Yin and Schütze, 2018a) use a TF-IDF vector representation along with a cosine similarity function (see Section 2.4 for a detailed description). However, there are some attempts that exploit additional representations such as ELMo embeddings (Chakrabarty et al., 2018).…”
Section: TF-IDF
confidence: 99%
“…Pre-trained BERT models have often been used for classification (supported, refuted, and not enough info). For claim verification, BERT-based models are prevalent (Soleimani et al., 2019; Portelli et al., 2020; Chernyavskiy and Ilvovsky, 2019; Nie et al., 2019; Tokala et al., 2019), while Longformer has been used for verdict prediction (Wadden et al., 2020b; Wright et al., 2022).…”
Section: Related Work
confidence: 99%