2020
DOI: 10.1017/s135132492000056x

Automatic classification of participant roles in cyberbullying: Can we detect victims, bullies, and bystanders in social media text?

Abstract: Successful prevention of cyberbullying depends on the adequate detection of harmful messages. Given the impossibility of human moderation on the Social Web, intelligent systems are required to identify clues of cyberbullying automatically. Much work on cyberbullying detection focuses on detecting abusive language without analyzing the severity of the event nor the participants involved. Automatic analysis of participant roles in cyberbullying traces enables targeted bullying prevention strategies. In this pape…


Cited by 24 publications (19 citation statements) | References 62 publications
“…Many of the classifiers are still only able to categorise content or behaviours as abusive or not abusive (for instance, based on hateful language or slurs); they do not provide a more nuanced description of the type of abuse involved, the severity of the case, or the roles played by those involved (for examples of multi-class classifiers, see, e.g., Balakrishnan et al., 2019; Jacobs et al., 2020). Furthermore, for a case of abuse to be classified as “cyberbullying”, there typically still needs to be some level of repetition, intent to hurt, and even power imbalance according to widely used definitions (although said definitions are under revision and tend to be intensely debated) 1, and classifiers for the most part do not capture those criteria (Cheng et al., 2020).…”
Section: Why Are Abuse Cyberbullying and Harassment Difficult To Mode… (mentioning)
Confidence: 99%
“…Similarly, [81] recognized bullying temporal patterns in the predator's questions using a time-series modeling methodology. Jacobs et al. [161] tried to automatically identify the different participants in a cyberbullying event from textual cyberbullying traces. Although the F1-score of their best model is not very high (56.7%), there is much scope in this area.…”
Section: Automated Cyberbullying Monitoring and Intervention System (mentioning)
Confidence: 99%
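The F1-score cited above (56.7% for the best participant-role model) is the harmonic mean of precision and recall, typically macro-averaged over the role classes. A minimal sketch of that computation, using invented labels and predictions (the role names and example data below are illustrative, not the paper's actual results):

```python
# Hypothetical sketch: per-class and macro-averaged F1 for a
# participant-role classifier. Data is invented for illustration.

def f1_score(gold, pred, positive):
    """F1 for one class treated as the positive label."""
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

gold = ["bully", "victim", "bystander", "bully", "victim"]
pred = ["bully", "bully", "bystander", "bully", "victim"]

roles = ["bully", "victim", "bystander"]
macro_f1 = sum(f1_score(gold, pred, r) for r in roles) / len(roles)
```

Macro-averaging weights each role equally, which matters here because bystander and victim posts are typically far rarer than neutral content in cyberbullying corpora.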
“…By adopting features used in [28], [65], we extracted six categories of features called term lists, which are derived from one binary feature. All redundant terms were discarded and only 2950 unique terms were listed.…”
Section: Term List (mentioning)
confidence: 99%
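The term-list features described above can be sketched as follows: each category's term list contributes one binary feature per message, set to 1 if the message contains any term from that list. The category names and terms below are invented for illustration; the cited work used six categories and 2950 unique terms.

```python
# Hypothetical sketch of binary term-list features. The categories and
# terms are invented; the real feature set is described in the cited work.

TERM_LISTS = {
    "insult": {"loser", "idiot"},
    "threat": {"hurt", "kill"},
}

def term_list_features(message):
    """Map a message to one binary feature per term-list category."""
    tokens = set(message.lower().split())
    return {cat: int(bool(tokens & terms)) for cat, terms in TERM_LISTS.items()}

feats = term_list_features("You are such a loser")
# feats == {"insult": 1, "threat": 0}
```

Deduplicating the lists (keeping only unique terms, as the quoted passage notes) avoids redundant lookups without changing which features fire.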