2022
DOI: 10.1016/j.fsidi.2022.301446

Using deep learning to detect social media ‘trolls’

Cited by 7 publications (4 citation statements)
References 11 publications
“…The first method involves directly extracting the text from the image for analysis. Optical character recognition (OCR) engines like Tesseract are commonly used for this purpose [21][22][23]. Experiments have demonstrated high detection accuracy in various applications, including text detection on book spines and traffic signs [24,25].…”
Section: Literature Review (mentioning)
confidence: 99%
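As an illustration of the first method described in the statement above (extracting text directly from an image with an OCR engine such as Tesseract), here is a minimal sketch. It is not taken from the cited paper; it assumes pytesseract and Pillow are installed, a Tesseract binary is on the PATH, and "screenshot.png" is a hypothetical input image of a social media post.

```python
# Minimal OCR sketch: extract text from an image with Tesseract via pytesseract.
# Assumptions (not from the cited work): pytesseract + Pillow installed,
# Tesseract binary available, and "screenshot.png" is a hypothetical input file.
from PIL import Image
import pytesseract


def extract_text(image_path: str) -> str:
    """Run Tesseract OCR on an image and return the recognised text."""
    image = Image.open(image_path)
    return pytesseract.image_to_string(image)


if __name__ == "__main__":
    # The extracted text could then be passed to a downstream troll-detection model.
    print(extract_text("screenshot.png"))
```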
“…A few of the relevant ones are described below: Text-based troll detection - Significant research has been conducted on troll detection in text in high-resource languages like English, Spanish and French [19][20][21][22][23], and the majority of earlier works concentrated on traditional features such as TF-IDF, word count, number of characters in the post, and average word length to train various ML models, such as SVM [24,25], Multinomial Naïve Bayes (MNB) [26], and LR [27]. Following advances in DL models, researchers applied Bi-LSTM (using Global Vectors (GloVe) word embeddings) [28], CNN (using Keras embeddings) [29], and transformers (using Bidirectional Encoder Representations from Transformers (BERT) vectors) [30] to detect trolling comments in text data. Transformer models outperformed the other models due to their ability to effectively capture contextual dependencies, leading to higher accuracy in various studies [30].…”
Section: Related Work (mentioning)
confidence: 99%
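The traditional-feature baseline mentioned in the statement above (TF-IDF features feeding a classical classifier) can be sketched as follows. This is a minimal illustration, not the cited authors' implementation; it assumes scikit-learn is installed, and the `posts` / `labels` toy data are hypothetical stand-ins for a labelled troll-comment corpus. SVM or Multinomial Naïve Bayes would slot into the same pipeline in place of logistic regression.

```python
# Minimal sketch of a traditional troll-detection baseline: TF-IDF + logistic regression.
# Assumptions (not from the cited works): scikit-learn installed; posts/labels are
# hypothetical toy data standing in for a real labelled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Hypothetical comments with binary labels: 1 = trolling, 0 = benign.
posts = [
    "you are pathetic and nobody likes you",
    "great article, thanks for sharing",
    "go back to where you came from",
    "interesting point, I had not considered that",
    "everyone here is an idiot except me",
    "could you share the source for this claim?",
]
labels = [1, 0, 1, 0, 1, 0]

# TF-IDF unigram/bigram features feeding a logistic regression classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.33, stratify=labels, random_state=0
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```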
“…As such, Guideline D1 directs platforms not only to remove acts of trolling but also to detect those who intend to troll, with the goal of changing their intentions and behaviour. In recent years, there has been much effort to improve the detection of trolls (see, e.g., MacDermott et al., 2022).…”
Section: Design Options (mentioning)
confidence: 99%