2017
DOI: 10.1007/978-3-319-67256-4_6
Stance Classification in Out-of-Domain Rumours: A Case Study Around Mental Health Disorders

Abstract: Social media being a prolific source of rumours, stance classification of individual posts towards rumours has gained attention in the past few years. Classification of stance in individual posts can in turn be used to determine the veracity of a rumour. Research in this direction has looked at rumours in different domains, such as politics, natural disasters or terrorist attacks. However, work has been limited to in-domain experiments, i.e. training and testing data belong to the same domain. This pr…

Cited by 10 publications (9 citation statements) | References 15 publications
“…Additionally, the current credibility prediction models face difficulty when they are applied to different events (Boididou et al., 2014; Aker et al., 2017), as the performance accuracy is overestimated. Our results found that there are significant differences in the content of the same credibility level and topic when generated from different locations.…”
Section: Discussion | Citation type: mentioning
confidence: 99%
“…In total we included the following features, which turned out to be useful in prior research on stance detection in Twitter communication [53] and were already annotated in the data set: for the author-related features we used the author's Twitter account description, the length of the account description, and role (the ratio between follower and followee counts); for the message-related features we took URL included, location included, person included, date included, negation included, Google bad word included (using a dictionary from Google to check if the tweet contains slang words), geo information enabled, and average word length; and for the meta-informational features we comprised originality (the number of tweets of a user), number of followers, engagement (the number of tweets relative to user account age) and sentiment (describes on a scale ranging from positive to negative the valence of the tweet with an assigned value between 0…”
Section: Insert Figure 1 About Here | Citation type: mentioning
confidence: 99%
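The feature set described in this statement maps naturally onto a small extraction routine. Below is a minimal Python sketch of that idea; the tweet field names, the stand-in bad-word list, and the extract_features helper are illustrative assumptions, not the implementation used in the cited study.

import re

# Hypothetical tweet record; field names are assumptions for illustration.
tweet = {
    "text": "Check https://example.com ... this is not true!",
    "author_description": "Journalist covering breaking news",
    "followers": 1200,
    "followees": 300,
    "tweet_count": 5400,
    "account_age_days": 900,
    "geo_enabled": True,
}

BAD_WORDS = {"damn", "crap"}  # stand-in for the Google bad-word dictionary

def extract_features(t):
    words = t["text"].split()
    return {
        # author-related features
        "description_length": len(t["author_description"]),
        "role": t["followers"] / max(t["followees"], 1),
        # message-related features
        "url_included": bool(re.search(r"https?://", t["text"])),
        "negation_included": any(w.lower() in {"not", "no", "never"} for w in words),
        "bad_word_included": any(w.lower().strip("!?.,") in BAD_WORDS for w in words),
        "geo_enabled": t["geo_enabled"],
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        # meta-informational features
        "originality": t["tweet_count"],
        "followers": t["followers"],
        "engagement": t["tweet_count"] / max(t["account_age_days"], 1),
    }

print(extract_features(tweet))

Each feature group mirrors the quoted grouping (author-related, message-related, meta-informational); location, person, date and sentiment detection are omitted here since they require external resources.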
“…PHEME5+Aug+boston is PHEME5+Aug combined with the "bostonbombings" event. We employ Kochkina et al.'s [4] method as a state-of-the-art baseline model for rumor detection, with three modifications. In their model, source tweets and replies are represented as 300-dimensional Word2Vec word embeddings pre-trained on the Google News data set.…”
Section: Rumor Detection | Citation type: mentioning
confidence: 99%
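For context, the sketch below shows one common way to represent a tweet with pre-trained 300-dimensional Google News Word2Vec vectors via gensim, by averaging the vectors of in-vocabulary tokens. This is a generic representation sketch, not the cited authors' full model; the local file path and the embed_tweet helper are assumptions.

import numpy as np
from gensim.models import KeyedVectors

# Assumes the Google News Word2Vec binary has been downloaded locally;
# the path below is a placeholder.
w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def embed_tweet(text, dim=300):
    # Average the 300-d vectors of tokens found in the vocabulary;
    # fall back to a zero vector for fully out-of-vocabulary tweets.
    vecs = [w2v[w] for w in text.lower().split() if w in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

source_vec = embed_tweet("Breaking: explosion reported downtown")
reply_vec = embed_tweet("this is not confirmed yet")

Averaging discards word order; sequence models instead consume the per-token vectors directly.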
“…This restricts a further exploration of NNs for representation learning through many layers of nonlinear processing units and different levels of abstraction [8], which results in overfitting and generalization concerns. The scarcity of labeled data is a major challenge facing research on rumors in social media [9]. Another problem is that publicly available data sets for rumor-related tasks suffer from imbalanced class distributions [10, 4].…”
Section: Introduction | Citation type: mentioning
confidence: 99%