2022 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/sp46214.2022.9833706
TrollMagnifier: Detecting State-Sponsored Troll Accounts on Reddit

Abstract: Growing evidence points to recurring influence campaigns on social media, often sponsored by state actors aiming to manipulate public opinion on sensitive political topics. Typically, campaigns are performed through instrumented accounts, known as troll accounts; despite their prominence, however, little work has been done to detect these accounts in the wild. In this paper, we present TROLLMAGNIFIER, a detection system for troll accounts. Our key observation, based on analysis of known Russian-sponsored troll…

Cited by 18 publications (11 citation statements)
References 66 publications
“…Karamshuk et al (Karamshuk et al 2016) study the linguistic choices for political agendas, and propose a natural language processing algorithm to identify partisan bias for Twitter users. Recently, Sakketou et al (Sakketou et al 2022) introduced the first Reddit dataset targeting users who spread fake partisan news, namely FACTOID. This dataset captures users' historical posts and interaction data and is validated by a psycho-linguistic feature analysis for bias classification.…”
Section: Related Work
confidence: 99%
“…Subsequently, a large amount of biased and partisan news has been shared online (YarAdua et al 2022;Osmundsen et al 2022) , making this an interesting case study. Recent works on Reddit have shed light on the important role of partisan news sharing in analyzing the propagation of political narratives (Hanley, Kumar, and Durumeric 2022) and troll accounts (Saeed et al 2022). These works both reveal the importance of understanding the evolution of media dissemination, users' responses to partisan news and the behavior of users who spread partisan news.…”
Section: Introduction
confidence: 99%
“…As social media platforms may not timely detect and remove potentially harmful accounts, researchers proposed methods to detect and analyze such accounts independently. Such accounts include users with political agendas [5], trolls [24], and social media bots, which we focus on extensively in this paper. Past research focuses on detecting bots using their profile features [12,18,20,31], content features [30], graph features [1] and temporal activity [3,21], or a combination of them [6,17,25].…”
Section: Twitter Bot Detection
confidence: 99%
“…Arguably, mitigating it faces some unique challenges. First, while some malicious actors spread misleading/false claims to advance their goals ("disinformation"), false narratives are often believed by real users in good faith, who then re-share them on social media ("misinformation") [42,81,85,89,93,102,105]. Second, identifying what is true or false is challenging, hard to automate, and often depends on external fact-checkers.…”
Section: Introduction
confidence: 99%
“…Finally, online platforms are often concerned about the effects of taking action on dis-and misinformation; for example, limiting what is allowed to be said on a platform can raise concerns about censorship and reduce engagement (and thus profit) [33,39,61]. Nevertheless, the computer security research community is well poised to develop effective mitigation strategies for the problem of false online information, as highlighted by recent research in top tier venues in the field [58,76,81].…”
Section: Introduction
confidence: 99%