2014 13th International Conference on Machine Learning and Applications
DOI: 10.1109/icmla.2014.81
TSD: Detecting Sybil Accounts in Twitter

Abstract: Fake identities and user accounts (also called "Sybils") in online communities represent a treasure trove for adversaries who spread fake product reviews, malware, and spam on social networks, and who astroturf political campaigns. State-of-the-art defense mechanisms include Automated Turing Tests (ATTs, such as CAPTCHAs) and graph-based Sybil detectors. Sybil detectors in social networks leverage the assumption that Sybils will find it hard to befriend real users, which leads to Sybils being connected to each…

Cited by 46 publications (28 citation statements)
References 14 publications (12 reference statements)
“…However, most of the botnet fake accounts are flagged within six months of their creation. This may be due to the multiple classifiers developed by the community in the past year to flag bot accounts and suspend them [1,2,7]. SMF service customers therefore see their follower count decrease as the fake accounts are flagged.…”
Section: Discussion
confidence: 99%
“…Once the post thread starts, those malicious accounts would be the first few users to lead the discussion. The second strategy is that these duplicate messages have been propagated 10 times by one account (Figure 10); most of them appeared in the early stage of the target's posts, and the commenting time vector of this account is [1, 8, 9, 11, 12, 13, 14, 18]. Here we examine the mean and standard deviation between malicious accounts and normal accounts; the result is as in Figure 11.…”
Section: B. Account Digital Footprint
confidence: 99%
“…Commonly used profile characteristics are age, image, description, number of followers, geolocation, and total number of posts. These data are used to construct classification systems and supervised learning algorithms aimed at identifying and blocking malicious accounts [9], [10], [11], [12], [13]. OSN connectivity and interaction features have been used to experiment with graph-based approaches to identifying accounts that exhibit inappropriate behaviors.…”
Section: Related Work
confidence: 99%
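The excerpt above describes feature-based classification of accounts. A minimal, purely illustrative sketch of that idea is below; the feature names, thresholds, and weights are invented for illustration and are not taken from any of the cited classifiers:

```python
# Hypothetical profile-feature scoring in the spirit of the cited
# classifiers; all thresholds and weights here are illustrative only.

def sybil_score(profile: dict) -> float:
    """Return a score in [0, 1]; higher means more Sybil-like."""
    score = 0.0
    # Few followers relative to accounts followed is a common signal.
    if profile["followers"] < 0.1 * max(profile["following"], 1):
        score += 0.4
    # A default profile image and an empty description are weak signals.
    if not profile.get("has_image", True):
        score += 0.3
    if not profile.get("description", ""):
        score += 0.3
    return min(score, 1.0)

suspicious = sybil_score({"followers": 3, "following": 900,
                          "has_image": False, "description": ""})
# suspicious == 1.0
```

Real systems replace such hand-set rules with a supervised learner trained on labeled accounts, but the input representation (a vector of profile features) is the same.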
“…While considering the two classes, a) fake and b) human, the experiment was conducted for each case mentioned above. To evaluate the final outcomes of these experiments, metrics based on standard indicators were considered [13]. Overfitting occurs when a model is trained too well on its training set: accuracy is high during training, but when the same model is run on the test set, the accuracy falls short.…”
Section: Prediction and Evaluation Criteria
confidence: 99%
“…With such widespread access and easy-to-use interfaces, it became a suitable domain for Sybil accounts. Sybil accounts are generally fake accounts [13]. These accounts are primarily created to increase the followers of a targeted account.…”
Section: Introduction
confidence: 99%