2022
DOI: 10.3233/sw-212838
Beyond facts – a survey and conceptualisation of claims in online discourse analysis

Abstract: Analyzing statements of facts and claims in online discourse is the subject of a multitude of research areas. Methods from natural language processing and computational linguistics help investigate issues such as the spread of biased narratives and falsehoods on the Web. Related tasks include fact-checking, stance detection and argumentation mining. Knowledge-based approaches, in particular works in knowledge base construction and augmentation, are concerned with mining, verifying and representing factual knowledg…

Cited by 3 publications (2 citation statements)
References 194 publications (265 reference statements)
“…The input claims themselves are individual sentences from the speeches, yet they sometimes refer to entities mentioned in the context rather than the claims themselves, and can depend on one another, as can be seen in Table 6. With this, the data can be seen as diverging from the task definition defining a check-worthy claim as input, as the individual queries in this dataset are not necessarily check-worthy by themselves and do not constitute individual claims, depending on the precise definition of what a claim is (see [4] for an overview of diverging definitions), which is left implicit in this task definition. In both the 2021 and 2022 datasets, the input query IDs contain the date of the debate which can be used as a feature to compare with the verified claim dates (cf.…”
Section: Political Debates Datasets (citation type: mentioning; confidence: 99%)
“…However, given that a particular claim proposition may occur in the form of diverse utterances [4], matching a given statement or utterance to fact-checked claims available from fact-checking portals remains a challenging problem. This problem is known as verified claim retrieval and has been recognised by the CheckThat!…”
Section: Introduction (citation type: mentioning; confidence: 99%)