2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)
DOI: 10.1109/uemcon51285.2020.9298158

Using BERT to Extract Topic-Independent Sentiment Features for Social Media Bot Detection

Cited by 70 publications (31 citation statements)
References 23 publications
“…According to work by Heidari et al [11], this research creates a new model that can classify unlabeled data of Covid-19 tweets' texts with high accuracy for misinformation based on the BERT model. Before the sentiment classification of public opinion, we apply LDA topic modeling to collect all related tweets to covid-19 topics.…”
Section: Methods (mentioning)
confidence: 99%
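The pipeline described in this statement, LDA topic modeling to gather COVID-related tweets followed by BERT-based classification, can be sketched roughly as follows. This is a minimal illustration assuming the gensim and Hugging Face transformers libraries, a toy tweet list, and hand-picked seed terms; it is not the cited authors' implementation.

```python
# Minimal sketch: LDA topic filtering followed by BERT-based tweet classification.
# Library choices (gensim, transformers), the toy tweets, and the seed terms are
# illustrative assumptions, not the cited authors' exact pipeline.
from gensim import corpora, models
from transformers import pipeline

tweets = [
    "New covid-19 vaccine rollout starts next week in our city.",
    "My cat refuses to get off the keyboard again.",
    "Masks do nothing, covid is a hoax spread by the media.",
]

# 1) LDA topic modeling over the tweet collection.
tokenized = [t.lower().split() for t in tweets]
dictionary = corpora.Dictionary(tokenized)
corpus = [dictionary.doc2bow(doc) for doc in tokenized]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

covid_terms = {"covid-19", "covid", "vaccine", "masks"}  # assumed seed terms
def is_covid_topic(topic_id, topn=10):
    # A topic counts as COVID-related if any of its top words is a seed term.
    return any(word in covid_terms for word, _ in lda.show_topic(topic_id, topn=topn))

covid_tweets = [
    t for t, bow in zip(tweets, corpus)
    if is_covid_topic(max(lda.get_document_topics(bow), key=lambda x: x[1])[0])
]

# 2) BERT-family sentiment classification of the filtered tweets
#    (the default pipeline model stands in for the BERT classifier here).
classifier = pipeline("sentiment-analysis")
for tweet, result in zip(covid_tweets, classifier(covid_tweets)):
    print(result["label"], round(result["score"], 3), tweet)
```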
“…Examples of recent methods for knowledge graphs are Relational Graph Convolutional Neural Networks (R-GCN) [104], which are able to extract features from a given data and, accordingly, generate a directed multigraph, label node types, and their relationships in the generated graph, and, finally, generate a latent knowledge-based representation that can be used for node classification as well as link prediction. Other language models, such as Bidirectional Encoder Representations from Transformers (BERT) [18], which use pre-trained language models, and their variations, including Knowledge Graph BERT (KG-BERT) [105] and Knowledge-enabled BERT (K-BERT) [103], can extract node and relation attributes for knowledge graph completion and link prediction [16]. A comprehensive review on embedding methods that are designed for knowledge graphs is available in [3].…”
Section: Learning-based Methods (mentioning)
confidence: 99%
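As a rough illustration of the KG-BERT idea referenced above, a knowledge graph triple can be serialized as text and scored for plausibility by a BERT sequence classifier. The sketch below assumes the Hugging Face transformers library and an untrained classification head, so the printed score is meaningless until the model is fine-tuned on labeled triples; it is not the KG-BERT authors' code.

```python
# Minimal sketch of KG-BERT-style triple scoring for knowledge graph completion.
# The model name and scoring setup are illustrative assumptions; a real KG-BERT
# model is fine-tuned on positive and corrupted triples before use.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def triple_plausibility(head: str, relation: str, tail: str) -> float:
    # Serialize the triple's textual names: head + relation as the first
    # segment, the tail entity as the second (a simplification of KG-BERT's
    # three-segment input).
    inputs = tokenizer(f"{head} {relation}", tail, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, 2)
    # Probability assigned to the "plausible" class (index 1 by convention here).
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(triple_plausibility("Barack Obama", "born in", "Hawaii"))
```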
“…Applications of link prediction include analyzing user-user and user-content recommendations in online social networks [5,[7][8][9], reconstruction of the PPI (protein-protein interaction) network and reducing the present noise [10][11][12], hyper-link prediction [13], prediction of transportation networks [14], forecasting the behavior of terrorism campaigns and social bots [15,16], reasoning and sensemaking in knowledge graphs [17], and knowledge graph completion while using data augmentation with Bidirectional Encoder Representations from Transformers (BERT) [18,19]. Link prediction in these applications has been mostly investigated through unsupervised graph representation and feature learning methods that are based on the node (local) or path (global) similarity metrics that evaluate the neighboring nodes.…”
Section: Introduction (mentioning)
confidence: 99%
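The node-similarity approach to link prediction mentioned at the end of this statement can be illustrated with a short NetworkX sketch; the toy graph and the choice of the Adamic-Adar metric are illustrative assumptions, not taken from the cited works.

```python
# Minimal sketch of unsupervised link prediction with a local (node-similarity)
# metric, using NetworkX and a small built-in social network as toy data.
import networkx as nx

G = nx.karate_club_graph()

# Candidate node pairs that are not yet connected.
candidates = list(nx.non_edges(G))

# Local similarity: Adamic-Adar index over common neighbors of each pair.
scores = nx.adamic_adar_index(G, candidates)

# Rank candidate links by score; the top pairs are the predicted links.
top = sorted(scores, key=lambda x: x[2], reverse=True)[:5]
for u, v, score in top:
    print(f"predicted link ({u}, {v}) with Adamic-Adar score {score:.3f}")
```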
“…The authors used basic stylistic features (the frequency of function words and part-of-speech trigrams) to classify news documents based on the corresponding publisher (newspaper or magazine) as well as text genre (editorial or news item). Nowadays, computational stylometry has a wide range of applications in literary science [15], forensics [3,4,37], social media analysis [7,9,18,19], psycholinguistics [30,32], and even source code analysis [2,13]. Computational stylometry has been also explored in different languages including Chinese [45], Arabic [17,21], Russian [27,31], and Turkish [1], among others [22,53].…”
Section: Related Work (mentioning)
confidence: 99%
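The stylometric features named at the start of this statement, function-word frequencies and part-of-speech trigrams, can be fed to a linear classifier as sketched below. The tiny corpus, the genre labels, the function-word list, and the library choices (NLTK, scikit-learn) are all illustrative assumptions; NLTK resource names may also differ across versions.

```python
# Minimal stylometry sketch: function-word frequencies + POS trigrams feeding a
# linear classifier. Corpus, labels, and word list are illustrative assumptions.
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline, make_union

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_tag_string(text: str) -> str:
    """Replace each token with its POS tag, so n-gram features become POS n-grams."""
    return " ".join(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text)))

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "is", "was", "for"]

docs = [
    "The committee announced that the budget was approved for the year.",
    "It is in the spirit of the law that we act, and it was always so.",
    "Honestly, that game was wild and the crowd loved it to bits.",
    "The match was fun, and it is clear the fans were in for a treat.",
]
labels = ["editorial", "editorial", "news_item", "news_item"]  # assumed genres

features = make_union(
    # Counts of a small, fixed function-word vocabulary.
    CountVectorizer(vocabulary=FUNCTION_WORDS),
    # Trigrams over the POS-tag sequence of each document.
    CountVectorizer(preprocessor=pos_tag_string, ngram_range=(3, 3)),
)
model = make_pipeline(features, LogisticRegression(max_iter=1000))
model.fit(docs, labels)
print(model.predict(["It was the kind of evening the fans will talk about."]))
```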