Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER) 2021
DOI: 10.18653/v1/2021.fever-1.11
Automatic Fact-Checking with Document-level Annotations using BERT and Multiple Instance Learning

Abstract: Automatic fact-checking is crucial for recognizing misinformation spreading on the internet. Most existing fact-checkers break down the process into several subtasks, one of which determines candidate evidence sentences that can potentially support or refute the claim to be verified; typically, evidence sentences with gold-standard labels are needed for this. In a more realistic setting, however, such sentence-level annotations are not available. In this paper, we tackle the natural language inference (NLI) su…

Cited by 6 publications (6 citation statements); references 30 publications.
“…Reliable references have become central to Wikipedia as it acts as an educational resource in under-resourced environments [24]. Not only are Wikipedia readers affected by the quality of content, but a large set of automated fact-checking datasets and solutions rely on it as ground-truth [7,21,42,45,47], giving the online encyclopedia a central role in the AI ecosystem.…”
Section: Related Work
confidence: 99%
“…Claim verification is typically addressed as an NLI problem (Thorne and Vlachos, 2018). Recent progress has enforced a closed-world reliance (Pratapa et al., 2020) and incorporated multiple instance learning (Sathe and Park, 2021). While data scarcity poses a major challenge for automated fact-checking, research on few-shot claim verification is limited to date.…”
Section: Claim Verification
confidence: 99%
“…For datasets, various fact-checking datasets representing different real-world domains have been proposed, including both naturally occurring (Augenstein et al., 2019; Gupta and Srikumar, 2021; Saakyan et al., 2021) and human-crafted (Thorne et al., 2018; Sathe et al., 2020; Schuster et al., 2021; Atanasova et al., 2022) fact-checking claims. While these FV datasets focus on different domains, there is still a substantial overlap in the abilities required to verify claims across these datasets.…”
Section: Related Work
confidence: 99%