2017
DOI: 10.1109/tmc.2016.2575828
Detecting Mobile Malicious Webpages in Real Time

Abstract: Mobile-specific webpages differ significantly from their desktop counterparts in content, layout, and functionality. Accordingly, existing techniques to detect malicious websites are unlikely to work for such webpages. In this paper, we design and implement kAYO, a mechanism that distinguishes between malicious and benign mobile webpages. kAYO makes this determination based on static features of a webpage ranging from the number of iframes to the presence of known fraudulent phone numbers. First, we experimenta…
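The static-feature idea described in the abstract can be illustrated with a minimal sketch. This is not kAYO's actual feature set or model; the feature names, the phone-number pattern, and the blocklist entry are illustrative assumptions only:

```python
import re

# Illustrative blocklist entry, not a real kAYO artifact.
FRAUDULENT_NUMBERS = {"1-800-555-0199"}

def extract_static_features(html: str) -> dict:
    """Compute a few simple static features from raw page HTML,
    in the spirit of the abstract's examples (iframe count,
    presence of known fraudulent phone numbers)."""
    phone_pattern = re.compile(r"\d-\d{3}-\d{3}-\d{4}")
    phones = set(phone_pattern.findall(html))
    lowered = html.lower()
    return {
        "num_iframes": lowered.count("<iframe"),
        "num_scripts": lowered.count("<script"),
        "has_fraudulent_phone": bool(phones & FRAUDULENT_NUMBERS),
    }

page = '<html><iframe src="x"></iframe>Call 1-800-555-0199 now!</html>'
features = extract_static_features(page)
```

In a real system these features would feed a trained classifier; the sketch only shows how static signals can be pulled from markup without executing the page.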

Cited by 43 publications (60 citation statements)
References 28 publications
“…However, Alexa only provides legitimate domain names, which do not capture the true nature and variety of complete URLs that a user can come across in a real-world scenario. Only a few authors go the extra mile to include a detailed description of how the data was collected [200].…”
Section: Opportunities For Future Research (mentioning)
confidence: 99%
“…To extract the feature representation from the lexical and static components of a web page, the machine learning models rely on the assumption that the infrastructure of phishing pages is different from that of legitimate pages. For example, in [7], phishing web pages are automatically detected based on handcrafted features extracted from the URL, HTML content, network, and JavaScript of a web page. Furthermore, natural language processing techniques are currently used to extract specific features such as the number of common phishing words, type of n-gram, etc.…”
Section: A Problem Definition (mentioning)
confidence: 99%
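The lexical features this excerpt mentions (phishing-word counts, character n-grams) can be sketched briefly. The word list, the token delimiters, and the n-gram size below are illustrative assumptions, not features from any cited paper:

```python
import re
from collections import Counter

# Illustrative phishing-word list, not drawn from any specific paper.
PHISHING_WORDS = {"login", "verify", "secure", "account"}

def lexical_features(url: str, n: int = 3) -> dict:
    """Compute simple lexical features of a URL: count of suspicious
    words among its tokens, total length, and most common character n-gram."""
    tokens = re.split(r"[/\-._?=&]", url.lower())
    ngrams = [url[i:i + n] for i in range(len(url) - n + 1)]
    return {
        "phishing_word_count": sum(t in PHISHING_WORDS for t in tokens),
        "url_length": len(url),
        "top_ngram": Counter(ngrams).most_common(1)[0][0] if ngrams else "",
    }

feats = lexical_features("http://secure-login.example.com/verify?account=1")
```

Features like these are cheap to compute at lookup time, which is why URL-lexical models are a common first filter before heavier content analysis.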
“…We inspected each of the 47 social networks manually (see https://en.wikipedia.org/wiki/List_of_social_networking_websites), and removed 37 of them for one of the following reasons: (i) social networks that no longer exist, (ii) we were unable to create user accounts, (iii) the social network is ranked too low in the Alexa Top 1M, (iv) platforms that do not support link sharing (e.g., Soundcloud), (v) platforms that require Premium subscriptions, (vi) social networks that merged with already discarded ones, and (vii) posting prevented due to bot detection. Table II lists the 10 social networks that we used for the study of this paper.…”
Section: Social Network (mentioning)
confidence: 99%
“…Similarly, Thomas et al [34] presented a technique to evaluate URLs shared not only on social networks but also on other web services such as blogs and webmails. In another line of work, the detection of malicious pages focused on inspecting their content, for both desktop browsers (e.g., Canali et al [6]) and mobile browsers (e.g., Amrutkar et al [1]). As opposed to these works, our paper does not present a detection technique, but it studies how social platforms behave when preparing previews of malicious URLs.…”
Section: Detection Of Malicious Content (mentioning)
confidence: 99%