2020
DOI: 10.1007/978-3-030-57855-8_12

Vocabulary-Based Method for Quantifying Controversy in Social Media

Abstract: Identifying controversial topics is not only interesting from a social point of view; it also enables the application of methods to avoid information segregation, creating better discussion contexts and, in the best cases, reaching agreements. In this paper we develop a systematic method for controversy detection based primarily on the jargon used by communities in social media. Our method dispenses with domain-specific knowledge, is language-agnostic, efficient, and easy to apply. We perform a…
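The abstract describes the approach only at a high level. As a purely illustrative sketch of the general idea behind vocabulary-based controversy quantification (not the authors' actual pipeline), one could compare the term distributions of two discussion communities and treat a larger divergence as a sign of more polarized jargon. The whitespace tokenization, the Jensen-Shannon divergence, and the toy data below are all assumptions made for illustration.

```python
# Minimal sketch of a vocabulary-based controversy proxy (illustrative only;
# NOT the authors' exact method). Assumes users are already split into two
# communities and that each community is represented by a list of texts.
from collections import Counter
import math

def term_distribution(texts):
    """Normalized term-frequency distribution over a community's vocabulary."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())  # naive whitespace tokenization
    total = sum(counts.values())
    return {term: c / total for term, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two term distributions
    (0 = identical vocabularies, ln 2 = completely disjoint)."""
    vocab = set(p) | set(q)
    m = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in vocab}
    def kl(a, b):
        return sum(a[t] * math.log(a[t] / b[t]) for t in a if a[t] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def controversy_score(community_a_texts, community_b_texts):
    """Higher divergence between community vocabularies suggests more polarized jargon."""
    return js_divergence(term_distribution(community_a_texts),
                         term_distribution(community_b_texts))

# Toy example: two communities discussing the same topic with distinct jargon.
score = controversy_score(["build the wall now", "secure the border"],
                          ["families belong together", "no human is illegal"])
print(round(score, 3))
```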

Cited by 6 publications (4 citation statements)
References 41 publications
“…Although these approaches are language- and domain-independent and can thus be applied easily to any topic discussion, they nevertheless present the drawback of not taking advantage of extra information. Some works attempted to overcome these limits by exploiting, for instance, named entities to infer the tendency (positive, negative, neutral) of users towards given named entities [23], or users' vocabulary to cluster users with similar vocabularies [24]. Some recent works consider controversy detection as a graph classification problem [3].…”
Section: Controversy Detection and Quantification
Citation type: mentioning, confidence: 99%
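As context for the vocabulary-clustering idea attributed to [24] in the statement above, the following is a hedged, purely illustrative sketch (the exact features and clustering algorithm used in that work may differ): users are represented by TF-IDF vectors over their concatenated tweets and grouped with k-means. The user documents, the two-cluster choice, and the use of scikit-learn are all assumptions made for illustration.

```python
# Illustrative sketch of clustering users by vocabulary similarity.
# Requires scikit-learn; the input documents below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical input: one document per user, built by concatenating their tweets.
user_docs = {
    "user_a": "lower taxes small government freedom",
    "user_b": "cut taxes deregulate freedom markets",
    "user_c": "healthcare for all workers rights union",
    "user_d": "universal healthcare living wage union",
}

vectorizer = TfidfVectorizer()                      # TF-IDF vocabulary features
X = vectorizer.fit_transform(user_docs.values())
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for user, label in zip(user_docs, labels):
    print(user, "-> community", label)
```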
“…From the data we had access to, some tweets may be missing from our datasets, depending on the topic, since tweets could have been deleted after they were last retrieved in [4]. The resulting dataset consists of 30 topics, with the number of tweets per topic ranging from 5,458 to 36,716 and the number of users ranging from 3,696 to 161,612 [24]. Table 2 summarizes the frequencies of tweets, retweets, users, and users with at least one published tweet for each topic.…”
Section: Dataset
Citation type: mentioning, confidence: 99%
“…Although these approaches are language- and domain-independent and can thus be applied easily to any topic discussion, they nevertheless present the drawback of not taking advantage of extra information. Some works attempted to overcome these limits by exploiting, for instance, named entities to infer the tendency (positive, negative, neutral) of users towards given named entities [17], or users' vocabulary to cluster users with similar vocabularies [18]. Some recent works consider controversy detection as a graph classification problem [2].…”
Section: Controversy Detection and Quantification
Citation type: mentioning, confidence: 99%
“…The resulting dataset consists of 30 topics, with the number of tweets per topic ranging from 5,458 to 36,716 and the number of users ranging from 3,696 to 161,612 [18]. Since the textual features used to explain communities are topic-dependent, we base the explainability section (Section 3.4) on only two topics for simplicity: one controversial (pelosi) and one non-controversial (thanksgiving).…”
Section: Dataset
Citation type: mentioning, confidence: 99%