2020
DOI: 10.1007/978-3-030-45442-5_65

CheckThat! at CLEF 2020: Enabling the Automatic Identification and Verification of Claims in Social Media

Abstract: We describe the third edition of the CheckThat! Lab, which is part of the 2020 Cross-Language Evaluation Forum (CLEF). CheckThat! proposes four complementary tasks and a related task from previous lab editions, offered in English, Arabic, and Spanish. Task 1 asks to predict which tweets in a Twitter stream are worth fact-checking. Task 2 asks to determine whether a claim posted in a tweet can be verified using a set of previously fact-checked claims. Task 3 asks to retrieve text snippets from a given set of We…

Cited by 29 publications (29 citation statements)
References 20 publications

“…A similar approach was adopted for a related task, e.g., it was used to obtain annotated training and testing data for the Check-Worthiness task of the CLEF CheckThat! Lab (Atanasova et al., 2018; Barrón-Cedeño et al., 2020).…”
mentioning
confidence: 99%
“…This has encouraged the development of AI solutions, e.g., as part of shared tasks such as the CLEF CheckThat! lab 2018-2021 [Elsayed et al., 2019; Barrón-Cedeño et al., 2020; Nakov et al., 2021a], and inside dedicated fact-checking organizations such as Full Fact. The problem is widely tackled as a ranking one, where the system has to produce a check-worthiness score.…”
Section: Finding Claims Worth Fact-checking
mentioning
confidence: 99%
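The statement above frames check-worthiness as a ranking problem: the system scores each sentence or tweet and ranks candidates by that score. Below is a minimal illustrative sketch of that formulation, assuming hypothetical training sentences and labels and a simple TF-IDF plus logistic-regression scorer; it is not the CheckThat! lab's official baseline, which participants typically replace with stronger neural models.

```python
# Minimal sketch of check-worthiness ranking: score each sentence with a
# supervised classifier, then rank candidates by the predicted score.
# Training data and model choice are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labelled examples: 1 = worth fact-checking, 0 = not.
train_texts = [
    "The unemployment rate fell to 3.5% last year.",
    "Good morning everyone, thanks for joining.",
    "Our plan will cut emissions by 40% by 2030.",
    "I love this city.",
]
train_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X_train = vectorizer.fit_transform(train_texts)

model = LogisticRegression()
model.fit(X_train, train_labels)

def rank_by_check_worthiness(sentences):
    """Return sentences sorted by predicted check-worthiness score."""
    X = vectorizer.transform(sentences)
    scores = model.predict_proba(X)[:, 1]  # probability of the "check-worthy" class
    return sorted(zip(sentences, scores), key=lambda pair: -pair[1])

for sentence, score in rank_by_check_worthiness([
    "Crime went down 20% under my administration.",
    "Thank you, it is wonderful to be here tonight.",
]):
    print(f"{score:.3f}  {sentence}")
```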
“…The task was also featured in the CLEF CheckThat! Lab [Barrón-Cedeño et al., 2020; Nakov et al., 2021a]. Vo and Lee [2020] explored a multi-modal setup, where tweets with claims about images were matched against the Fauxtography section of Snopes.…”
Section: Detecting Previously Fact-checked Claims
mentioning
confidence: 99%
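Detecting previously fact-checked claims, as described above, is essentially a retrieval problem: given an input tweet, rank a database of already-verified claims by similarity. The sketch below illustrates this under stated assumptions; the claim texts are hypothetical, and TF-IDF cosine similarity stands in for the stronger lexical (e.g., BM25) or embedding-based retrieval used by actual systems.

```python
# Minimal sketch of matching a tweet against a database of previously
# fact-checked claims via TF-IDF cosine similarity. The claims below are
# hypothetical examples, not real fact-check entries.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical set of previously fact-checked claims.
verified_claims = [
    "Drinking bleach cures the virus.",
    "The Eiffel Tower was sold for scrap in 1925.",
    "5G towers spread disease.",
]

vectorizer = TfidfVectorizer()
claim_matrix = vectorizer.fit_transform(verified_claims)

def retrieve_fact_checks(tweet, top_k=2):
    """Return the top_k previously fact-checked claims most similar to the tweet."""
    tweet_vec = vectorizer.transform([tweet])
    sims = cosine_similarity(tweet_vec, claim_matrix)[0]
    ranked = sims.argsort()[::-1][:top_k]
    return [(verified_claims[i], float(sims[i])) for i in ranked]

print(retrieve_fact_checks("Is it true that 5G causes illness?"))
```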
“…Political Speeches For political speeches, the most studied datasets come from the CLEF CheckThat! shared tasks (Atanasova et al., 2018; Elsayed et al., 2019; Barrón-Cedeño et al., 2020) and ClaimRank (Jaradat et al., 2018). The data consist of transcripts of political debates and speeches where each sentence has been annotated by an independent news or fact-checking organization for whether or not the statement should be checked for veracity.…”
Section: Claim Check-worthiness Detection
mentioning
confidence: 99%
“…There are multiple isolated lines of research which have studied variations of this problem. Figure 1 provides examples from three tasks which are studied in this work: rumour detection on Twitter (Zubiaga et al., 2016), check-worthiness ranking in political debates and speeches (Atanasova et al., 2018; Elsayed et al., 2019; Barrón-Cedeño et al., 2020), and citation needed detection on Wikipedia (Redi et al., 2019). Each task is concerned with a shared underlying problem: detecting claims which warrant further verification.…”
Section: Introduction
mentioning
confidence: 99%