2019
DOI: 10.1609/aaai.v33i01.33016859
Combining Fact Extraction and Verification with Neural Semantic Matching Networks

Abstract: The increasing concern with misinformation has stimulated research efforts on automatic fact checking. The recently released FEVER dataset introduced a benchmark fact-verification task in which a system is asked to verify a claim using evidential sentences from Wikipedia documents. In this paper, we present a connected system consisting of three homogeneous neural semantic matching models that conduct document retrieval, sentence selection, and claim verification jointly for fact extraction and verification. For…

Cited by 209 publications (277 citation statements). References 22 publications.
“…The system procedure is listed below: (1) Term-Based Retrieval: To begin with, we used a combination of the TF-IDF method and a rule-based keyword matching method to narrow the scope from the whole of Wikipedia down to a set of related paragraphs; this is a standard procedure in MRS (Chen et al., 2017; Nie et al., 2019). The focus of this step is to efficiently select a candidate set P_I that covers as much of the relevant information as possible (P_I ⊂ K) while keeping the size of the set small enough for downstream processing.…”
Section: Methods
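The term-based retrieval step quoted above can be illustrated with a minimal sketch: rank paragraphs against the claim by TF-IDF cosine similarity and union the result with a crude keyword-match rule. This is not the authors' pipeline; the function name `retrieve_candidates`, the capitalized-token heuristic, and all thresholds are illustrative assumptions.

```python
# Minimal sketch of term-based candidate retrieval (TF-IDF ranking plus a
# simple keyword-match rule). Names and heuristics are illustrative only,
# not the exact pipeline of Chen et al. (2017) or Nie et al. (2019).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def retrieve_candidates(claim, paragraphs, top_k=10):
    """Rank `paragraphs` (list of str) against `claim` and return a candidate set."""
    vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
    # Fit on the paragraphs and map both sides into the same TF-IDF space.
    para_vecs = vectorizer.fit_transform(paragraphs)
    claim_vec = vectorizer.transform([claim])

    # Cosine similarity between the claim and every paragraph.
    scores = cosine_similarity(claim_vec, para_vecs).ravel()
    ranked = scores.argsort()[::-1][:top_k]

    # Rule-based keyword matching: also keep paragraphs containing any
    # capitalized claim token (a crude stand-in for entity matching).
    keywords = {tok for tok in claim.split() if tok[:1].isupper()}
    keyword_hits = [
        i for i, p in enumerate(paragraphs) if any(k in p for k in keywords)
    ]

    # Union of the two candidate sets, TF-IDF order first, duplicates dropped.
    candidate_ids = list(dict.fromkeys(list(ranked) + keyword_hits))
    return [paragraphs[i] for i in candidate_ids]


if __name__ == "__main__":
    claim = "Barack Obama was born in Hawaii."
    paragraphs = [
        "Barack Obama served as the 44th president of the United States.",
        "Hawaii is an island state of the United States.",
        "The Eiffel Tower is located in Paris.",
    ]
    print(retrieve_candidates(claim, paragraphs, top_k=2))
```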
“…To the best of our knowledge, we are the first to apply and optimize neural semantic retrieval at both the paragraph and sentence levels for MRS. Automatic Fact Checking: Recent work formalized the task of automatic fact checking from the viewpoint of machine learning and NLP. The release of FEVER has stimulated many recent developments (Nie et al., 2019; Yoneda et al., 2018; Hanselowski et al., 2018) on data-driven neural networks for automatic fact checking. We also consider this task as MRS because the two share almost the same setup, except that the downstream task is verification or natural language inference (NLI) rather than QA.…”
Section: Related Work
“…They rank pages with logistic regression and extra features such as capitalization, sentence position, and token matching. Keyword matching along with page-view statistics is used in (Nie et al., 2019). UKP-Athene (Hanselowski et al., 2018), the team with the highest document-retrieval score, uses the MediaWiki API to search the Wikipedia database for the claim's noun phrases.…”
Section: Document Retrieval
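The document-retrieval strategy quoted above, searching Wikipedia for phrases drawn from the claim, can be sketched against the public MediaWiki search API. The phrase extraction below is a deliberate simplification (hand-picked phrases instead of the constituency-parser-based noun-phrase extraction of Hanselowski et al., 2018), and the helper `search_wikipedia_titles` is a hypothetical name.

```python
# Minimal sketch of searching Wikipedia page titles for phrases taken from a
# claim, loosely in the spirit of the UKP-Athene document-retrieval step.
# The endpoint and parameters are the standard MediaWiki search API; the
# phrase list is a crude placeholder for real noun-phrase extraction.
import requests

API_URL = "https://en.wikipedia.org/w/api.php"


def search_wikipedia_titles(phrase, limit=5):
    """Return up to `limit` Wikipedia page titles matching `phrase`."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": phrase,
        "srlimit": limit,
        "format": "json",
    }
    response = requests.get(API_URL, params=params, timeout=10)
    response.raise_for_status()
    results = response.json()["query"]["search"]
    return [hit["title"] for hit in results]


if __name__ == "__main__":
    claim = "Roman Atwood is a content creator."
    # Placeholder "noun phrases"; a real system would parse the claim.
    phrases = ["Roman Atwood", "content creator"]
    candidate_pages = {title for p in phrases for title in search_wikipedia_titles(p)}
    print(candidate_pages)
```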
“…The Papelo team (Malon, 2018) employs transformer networks with pre-trained weights (Radford et al., 2018). ESIM has been widely used among the FEVER challenge participants (Nie et al., 2019; Yoneda et al., 2018; Hanselowski et al., 2018). UNC (Nie et al., 2019), the winner of the competition, proposes a modified ESIM that takes the concatenation of the retrieved evidence sentences and the claim along with ELMo embeddings and three additional token-level features: WordNet, number embedding, and semantic relatedness scores from the document retrieval and sentence retrieval steps.…”
Section: Claim Verification
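To make the idea of token-level feature augmentation concrete, the sketch below concatenates word embeddings with extra per-token feature vectors (e.g. a WordNet indicator, a number embedding, retrieval relatedness scores) before a BiLSTM encoder. The dimensions, feature shapes, and module names are assumptions for illustration, not the actual NSMN architecture of Nie et al. (2019).

```python
# Illustrative sketch only: concatenating token-level features with word
# embeddings before a BiLSTM encoder, in the spirit of the modified ESIM
# described above. All dimensions and names are assumptions.
import torch
import torch.nn as nn


class TokenFeatureEncoder(nn.Module):
    def __init__(self, embed_dim=300, wordnet_dim=10, num_dim=5,
                 score_dim=2, hidden_dim=128):
        super().__init__()
        input_dim = embed_dim + wordnet_dim + num_dim + score_dim
        # Bidirectional LSTM over the feature-augmented token sequence.
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True,
                               bidirectional=True)

    def forward(self, word_embeds, wordnet_feats, number_embeds, rel_scores):
        # Each input: (batch, seq_len, feature_dim); concatenate along features.
        x = torch.cat([word_embeds, wordnet_feats, number_embeds, rel_scores],
                      dim=-1)
        outputs, _ = self.encoder(x)
        return outputs  # (batch, seq_len, 2 * hidden_dim)


if __name__ == "__main__":
    batch, seq_len = 2, 12  # e.g. concatenated evidence sentences + claim
    model = TokenFeatureEncoder()
    out = model(torch.randn(batch, seq_len, 300),   # word / ELMo-style embeddings
                torch.randn(batch, seq_len, 10),    # WordNet indicator features
                torch.randn(batch, seq_len, 5),     # number embeddings
                torch.randn(batch, seq_len, 2))     # retrieval relatedness scores
    print(out.shape)  # torch.Size([2, 12, 256])
```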