2020
DOI: 10.7249/rr2705

Counter-Radicalization Bot Research: Using Social Bots to Fight Violent Extremism

Abstract: Limited Print and Electronic Distribution Rights. This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited. Permission is given to duplicate this document for personal use only, as long as it is unaltered and complete. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial use. For inform…

Cited by 4 publications (2 citation statements)
References 1 publication (1 reference statement)
“…Twitter accounts controlled by automated programs, also known as Twitter bots, have become a widely recognized, concerning, and studied phenomenon (Ferrara et al., 2016a; Aiello et al., 2012). Twitter bots have been deployed with malicious intents, such as disinformation spread (Cui et al., 2020; Lu and Li, 2020; Huang et al., 2022), interference in elections (Howard et al., 2016; Bradshaw et al., 2017; Ferrara et al., 2016a; Rossi et al., 2020), promotion of extremism (Ferrara et al., 2016b; Marcellino et al., 2020), and the spread of conspiracy theories (Ferrara, 2020; Ahmed et al., 2020). These triggered the development of automatic Twitter bot detection models aiming at mitigating harms from bots' malicious interference (Cresci, 2020).…”
Section: Introduction
confidence: 99%
“…Automated users on Twitter, also known as Twitter bots, have become a widely known and well-documented phenomenon. Over the past decade, malicious Twitter bots were responsible for a wide range of problems such as online disinformation [Cui et al., 2020, Lu and Li, 2020], election interference [Howard et al., 2016, Neudert et al., 2017, Rossi et al., 2020, Ferrara, 2017], extremist campaigns [Ferrara et al., 2016, Marcellino et al., 2020], and even the spread of conspiracy theories [Ferrara, 2020, Ahmed et al., 2020, Anwar et al., 2021]. These societal challenges have called for automatic Twitter bot detection models to mitigate their negative influence.…”
Section: Introduction
confidence: 99%