2018
DOI: 10.1007/978-3-319-91521-0_32
Changing Perspectives: Is It Sufficient to Detect Social Bots?

Cited by 82 publications (63 citation statements). References 13 publications.
“…Cresci, Petrocchi, Spognardi, and Tognazzi (2019) proposed the use of evolutionary algorithms to improve social bot skills. Grimme et al. (2018) employed a hybrid approach involving automatic and manual actions to achieve bots that would be classified as human by a supervised bot detection system. Despite the good intention of pointing to weaknesses in existing systems, this research might also inspire bot creators and give them a competitive advantage.…”

Section: Bot Detection Methods
Confidence: 99%
“…Second, we present a theoretically informed method for the detection of a disinformation campaign. We argue that past research's predominant focus on automated accounts, famously known as "social bots" (see Howard & Kollanyi, 2016; Varol, Ferrara, Davis, Menczer, & Flammini, 2017), misses its target, since reports on recent astroturfing campaigns suggest that they are often at least partially run by actual humans (so-called "cyborgs": Chu, Gianvecchio, Wang, & Jajodia, 2012), which may shield the accounts from detection strategies focused on automated behavior (Grimme, Assenmacher, & Adam, 2018). Using bot detection to study astroturfing betrays a fundamental conceptual mismatch: bots are one tool that can be used in an astroturfing campaign, but not all bots are part of it, and conversely, not all astroturfing accounts are bots.…”

Section: Introduction
Confidence: 95%
“…RQ 3: With regard to diagnostic ability, are there differences in the performance of Botometer between languages? Grimme et al. (2018) showed in their study with 3 bots in an experimental setting that the classification score is not stable over time. Therefore, we are interested in not only measuring the Botometer score once but instead tracking accounts over a longer period of time.…”

Section: Research Questions
Confidence: 99%
“…However, Botometer is not above criticism. Grimme, Assenmacher, and Adam (2018) show in their study, for example, that Botometer could not precisely classify the hybrid and fully automated bot accounts that the authors had created. The original creators are also aware of the potential limitations of their tool and admit that "many machine learning algorithms, including those used by Botometer, do not make it easy to interpret their classifications" (Yang et al., 2019, p. 58).…”

Section: Introduction
Confidence: 98%