2021
DOI: 10.3390/app11219857

Social Botomics: A Systematic Ensemble ML Approach for Explainable and Multi-Class Bot Detection

Abstract: OSN platforms are under attack by intruders born and raised within their own ecosystems. These attacks have multiple scopes, from mild critiques to violent offences targeting individual or community rights and opinions. Negative publicity on microblogging platforms, such as Twitter, is due to the infamous Twitter bots, which highly impact posts’ circulation and virality. A wide and ongoing research effort has been devoted to developing appropriate countermeasures against emerging “armies of bots”. However, the batt…


Cited by 13 publications (9 citation statements)
References 51 publications
“…Bot-Detective [203] is an explainable Twitter bot detection service with crowdsourcing functionalities that uses LIME. LIME is also used in JITBot [204], An Explainable Just-In-Time Defect Prediction Bot, and in [205], a bot-type classification schema. SHAP and LIME are used in [206] for game BOT detection, while in [207], the authors used a Decision Tree model, Explainable by definition, for automatic detection on Twitter with a particular case study on posts about COVID-19.…”
Section: Explainable Artificial Intelligence in Bot(net) Detection
Citation type: mentioning, confidence: 99%
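The statement above refers to LIME-based explanations for bot classifiers. As an illustrative aside, not taken from any of the cited works, the following is a minimal sketch of how LIME can be attached to a tabular bot-vs-human classifier to produce per-account explanations; the random-forest model, the synthetic data, and feature names such as followers_count and tweets_per_day are hypothetical placeholders, not the feature set used in the paper.

```python
# Minimal sketch: local explanations for a tabular bot classifier with LIME.
# Requires the `lime` and `scikit-learn` packages. Data and features are
# synthetic stand-ins for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["followers_count", "friends_count",
                 "tweets_per_day", "account_age_days"]

# 500 synthetic accounts, binary label (0 = human, 1 = bot).
X = rng.random((500, len(feature_names)))
y = (X[:, 2] > 0.6).astype(int)  # toy rule so the model has signal to learn

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["human", "bot"],
    mode="classification",
)

# Explain one account: LIME perturbs the instance, fits a local surrogate
# model, and reports which features pushed the prediction toward "bot".
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # list of (feature condition, weight) pairs
```

The same pattern extends to multi-class bot-type classification by passing more class names and a multi-class predict_proba; SHAP can be substituted where global, additive attributions are preferred over LIME's local surrogates.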
“…The accuracy of the model may further decay when dealing with new accounts different from those in the training datasets. These accounts might come from a different context, use different languages other than English [52, 53], or show novel behavioral patterns [34, 45, 54]. These limitations are inevitable for all supervised machine learning algorithms, and are the reasons why Botometer has to be upgraded routinely.…”
Section: How Botometer Works
Citation type: mentioning, confidence: 99%
“…The accuracy of the model may further decay when dealing with new accounts different from those in the training datasets. These accounts might come from a different context, use different languages other than English [48, 49], or show novel behavioral patterns [43, 34, 50]. These limitations are inevitable for all supervised machine learning algorithms, and are the reasons why Botometer has to be upgraded routinely.…”
Section: Model Accuracy
Citation type: mentioning, confidence: 99%