Proceedings of the 24th ACM International Conference on Information and Knowledge Management (CIKM 2015)
DOI: 10.1145/2806416.2806652

Detecting Check-worthy Factual Claims in Presidential Debates

Abstract: Public figures such as politicians make claims about "facts" all the time. Journalists and citizens spend a good amount of time checking the veracity of such claims. Toward automatic fact checking, we developed tools to find check-worthy factual claims from natural language sentences. Specifically, we prepared a U.S. presidential debate dataset and built classification models to distinguish check-worthy factual claims from non-factual claims and unimportant factual claims. We also identified the most-effective…
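The feature list is truncated in the abstract above, but the kind of per-sentence featurization such a classifier could use (sentence length, part-of-speech counts, and similar surface cues) can be sketched. The feature names and NLTK tooling below are illustrative assumptions, not the authors' code:

```python
# Hypothetical per-sentence features for check-worthiness classification.
# Assumes NLTK's "punkt" and "averaged_perceptron_tagger" data are installed.
from collections import Counter
import nltk

def sentence_features(sentence):
    """Surface cues that tend to separate factual claims from opinion."""
    tokens = nltk.word_tokenize(sentence)
    pos_counts = Counter(tag for _, tag in nltk.pos_tag(tokens))
    return {
        "length": len(tokens),
        "num_cardinal": pos_counts["CD"],            # numbers often signal factual claims
        "num_past_verbs": pos_counts["VBD"],         # past-tense reporting
        "num_comparatives": pos_counts["JJR"] + pos_counts["RBR"],
    }

print(sentence_features("Unemployment fell to 5 percent last year."))
# e.g. {'length': 8, 'num_cardinal': 1, 'num_past_verbs': 1, 'num_comparatives': 0}
```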

Cited by 111 publications (116 citation statements); references 11 publications.
“…The choice of claims to fact-check is a task in its own right, as shown by Hassan et al. (2015). Finally, the only other use of data from the Emergent project is by Liu et al. (2015); however, their focus was not on the NLP aspects of the task but on using Twitter data to assess the veracity of the claim, ignoring the articles and their stances curated by the journalists.…”
Section: Related Work
confidence: 99%
“…ClaimBuster [31,32] (whose functionalities we describe below) is quite complete; a recent vision paper [48] points toward a comparable architecture, and FullFact also states that they are working to develop a complete platform [3]. The CJ Workbench [59] is another example of a grassroots initiative to automate journalistic work.…”
Section: State of the Art
confidence: 99%
“…For instance, [43] presents a corpus where claims have been manually classified as verifiable and unverifiable, and [29] uses different kinds of neural networks to learn a classification model. A Support Vector Machine is used in ClaimBuster [31,32], which monitors data sources such as social media, TV programs, and websites, analyses the incoming stream of information, and classifies claims into three categories: non-factual (e.g., opinions or subjective content); factual but not interesting (consensual, general); and factual and interesting (that is, check-worthy). To train the classifier, a database of 20,000 claims annotated with these categories has been built through crowdsourcing.…”
Section: Claim Extraction
confidence: 99%
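The three-way scheme described in this excerpt can be sketched with a minimal scikit-learn pipeline. This is an illustrative stand-in, not the deployed ClaimBuster model: the toy sentences and TF-IDF features below are assumptions, whereas the real classifier is trained on the ~20,000 crowd-annotated claims with a richer feature set:

```python
# Minimal three-way claim classifier sketch (NFS / UFS / CFS);
# an illustrative stand-in for a ClaimBuster-style model.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

sentences = [                                           # toy training data
    "Thank you all for coming tonight.",                # NFS: non-factual
    "I visited a factory in Ohio last week.",           # UFS: factual, unimportant
    "Unemployment has fallen to 5 percent this year.",  # CFS: check-worthy
]
labels = ["NFS", "UFS", "CFS"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("svm", LinearSVC()),
]).fit(sentences, labels)

# Rank an incoming stream by the SVM margin for the check-worthy class.
stream = ["The deficit doubled in four years.", "Good evening, everyone."]
cfs_column = list(clf.classes_).index("CFS")
scores = clf.decision_function(stream)[:, cfs_column]
for sent, score in sorted(zip(stream, scores), key=lambda p: -p[1]):
    print(f"{score:+.2f}  {sent}")
```

Ranking by the decision-function margin, rather than hard labels, matches the excerpt's picture of a system that continuously monitors a stream and surfaces the most check-worthy sentences first.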
“…Existing fact checking systems are capable of detecting fact-check-worthy claims in text (Hassan et al., 2015b), returning semantically similar textual claims (Walenz et al., 2014), and scoring the truth of triples on a knowledge graph through semantic distance (Ciampaglia et al., 2015). However, none of these is suitable for fact checking a claim made in natural language against a database.…”
Section: Introduction
confidence: 99%
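For the last of the three capabilities mentioned, scoring triples on a knowledge graph via semantic distance, a heavily simplified sketch in the spirit of Ciampaglia et al. (2015) follows. The toy graph and the degree-penalised path score are illustrative assumptions, not that paper's exact formula:

```python
# Toy sketch of semantic-distance truth scoring on a knowledge graph:
# a claim linking two entities scores high if a short path connects
# them while avoiding generic high-degree hub nodes.
import math
import networkx as nx

G = nx.Graph()  # toy graph; the original work used a Wikipedia-derived KG
G.add_edges_from([
    ("Barack Obama", "United States"),
    ("United States", "Washington, D.C."),
    ("Barack Obama", "Democratic Party"),
])

def semantic_proximity(graph, subj, obj):
    """Score in (0, 1]: 1.0 for directly linked entities, decaying with
    the log-degree of every intermediate node on the shortest path."""
    try:
        path = nx.shortest_path(graph, subj, obj)
    except nx.NetworkXNoPath:
        return 0.0
    hub_penalty = sum(math.log(graph.degree(n)) for n in path[1:-1])
    return 1.0 / (1.0 + hub_penalty)

print(semantic_proximity(G, "Barack Obama", "Washington, D.C."))  # ~0.59
```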