2019
DOI: 10.1109/tetc.2017.2669404
Who You Should Not Follow: Extracting Word Embeddings from Tweets to Identify Groups of Interest and Hijackers in Demonstrations

Cited by 11 publications (5 citation statements)
References 22 publications
“…The CBOW and Skip-gram perform similarly, although Skip-gram is more useful and gives a better outcome for infrequent words [62] . In our study, we are concerned with frequent words, and therefore adopted CBOW for word2vec training.…”
Section: Methods
confidence: 95%
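The citation statement above concerns the choice between word2vec's two training objectives: CBOW averages the context-word vectors to predict the centre word, which is why it works well for frequent words, while Skip-gram predicts the context from the centre word and handles infrequent words better. A minimal toy sketch of the CBOW update, written from scratch in numpy (this is an illustration of the technique, not the cited paper's code; the corpus, dimensions, and learning rate are arbitrary stand-ins):

```python
import numpy as np

# Toy CBOW sketch: the mean of the context-word embeddings is used to
# predict the centre word via a softmax over the vocabulary.
rng = np.random.default_rng(0)
corpus = "we adopted cbow for word2vec training on frequent words".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8                         # vocabulary size, embedding dim
W_in = rng.normal(scale=0.1, size=(V, D))    # input (context) embeddings
W_out = rng.normal(scale=0.1, size=(D, V))   # output (centre-word) weights

def cbow_step(context, centre, lr=0.1):
    """One SGD step: average context embeddings -> softmax -> cross-entropy."""
    h = W_in[[idx[w] for w in context]].mean(axis=0)  # context average
    scores = h @ W_out
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    loss = -np.log(probs[idx[centre]])
    grad = probs.copy()                      # d(loss)/d(scores)
    grad[idx[centre]] -= 1.0
    W_out[:] -= lr * np.outer(h, grad)       # update output weights
    h_grad = W_out @ grad
    for w in context:                        # context words share the gradient
        W_in[idx[w]] -= lr * h_grad / len(context)
    return loss

# (context, centre) pairs with a symmetric window of 1
pairs = [([corpus[i - 1], corpus[i + 1]], corpus[i])
         for i in range(1, len(corpus) - 1)]
losses = [sum(cbow_step(c, t) for c, t in pairs) for _ in range(50)]
print(round(losses[0], 3), round(losses[-1], 3))
```

In practice this choice is a single flag in off-the-shelf word2vec implementations; the point of the sketch is only to show why averaging the context dilutes the signal for rare words.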
“…In a different line of research, the impact of malicious accounts on political tweet sentiment was explored [36], [37]. Some of these approaches have proposed semisupervised methods (using word embeddings) to classify Twitter hashtags and detect groups of interest and potential thread hijackers commenting on political events.…”
Section: A Related Work
confidence: 99%
“…Some of these approaches have proposed semisupervised methods (using word embeddings) to classify Twitter hashtags and detect groups of interest and potential thread hijackers commenting on political events. Others have built datasets using convolutional neural networks trained on the sentiment140 dataset [36]. In another example of the use of word embeddings [38], a neural network was trained to learn word cooccurrences and generate word vectors from a corpus of four million political tweets extracted during the EU referendum of 23 June 2016.…”
Section: A Related Work
confidence: 99%
“…Then, the vector representation for t is w_t. The authors of this paper have worked in tweets modeling with word2vec in previous research projects, and the detailed methodology which covers tweets cleaning/pre-processing and text modeling is explained in [15]. It is worth mentioning that the tweets are being represented as 300-dimension vectors.…”
Section: Tweets Modeling
confidence: 99%
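The citation statement above describes mapping each token t to a 300-dimensional word2vec vector w_t and aggregating those vectors into a single tweet representation. A hedged sketch of that aggregation step, using mean pooling over a toy vocabulary (the embedding table and tokens below are illustrative stand-ins, not the paper's trained model, and the cited work may aggregate differently):

```python
import numpy as np

# Toy stand-in for a trained 300-dimensional word2vec embedding table.
DIM = 300
rng = np.random.default_rng(42)
embeddings = {w: rng.normal(size=DIM) for w in ["protest", "march", "city"]}

def tweet_vector(tokens, embeddings, dim=DIM):
    """Represent a tweet as the mean of its in-vocabulary word vectors.

    Out-of-vocabulary tokens are skipped; a tweet with no known tokens
    falls back to the zero vector.
    """
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

v = tweet_vector(["protest", "march", "unknown"], embeddings)
print(v.shape)  # (300,)
```

Mean pooling keeps the tweet vector in the same 300-dimensional space as the word vectors, so downstream similarity or classification steps can reuse the word2vec geometry directly.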