2009 Second International Conference on Emerging Trends in Engineering & Technology
DOI: 10.1109/icetet.2009.124
Implementation of Web Crawler

Abstract: A program that automatically downloads pages from the World Wide Web is known as a Web crawler, spider, or bot. A crawler visits many sites to obtain data, which is then analyzed and mined in one location, either online or offline. Crawlers are needed to help applications stay current as pages and links are added, deleted, moved, or modified. Crawlers can be used in business-intelligence applications: organizations use them to collect information about their competitors and potential col…
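The crawl loop the abstract describes (fetch a page, extract its links, queue the unvisited ones) can be sketched minimally in Python. This is an illustrative sketch, not the paper's implementation: the `LinkExtractor` and `crawl_step` names and the tiny in-memory `pages` dict are invented for the example, and a real crawler would fetch pages over HTTP instead.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects absolute URLs from <a href="..."> tags on one page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

def crawl_step(frontier, visited, fetch):
    """One crawl iteration: pop a URL, parse it, queue unseen out-links."""
    url = frontier.pop(0)
    if url in visited:
        return
    visited.add(url)
    parser = LinkExtractor(url)
    parser.feed(fetch(url))
    for link in parser.links:
        if link not in visited:
            frontier.append(link)

# Tiny in-memory "web" so the sketch runs without network access.
pages = {
    "http://example.com/":  '<a href="/a">A</a><a href="/b">B</a>',
    "http://example.com/a": '<a href="/">home</a>',
    "http://example.com/b": "",
}
frontier, visited = ["http://example.com/"], set()
while frontier:
    crawl_step(frontier, visited, lambda u: pages.get(u, ""))
print(sorted(visited))
```

Swapping the `fetch` callable for a real HTTP client turns the same loop into a network crawler; the breadth-first frontier here is the simplest possible selection policy.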

Cited by 30 publications (7 citation statements)
References 5 publications
“…This is due to the fact that the security measures are primarily concerned with unsolicited bulk messages and not against software entities behaving like normal users. Generally speaking, given the high percentage of users connected participating in OSN activities, it is easy to collect information about persons, even in an automated manner, with batch procedures, through web spiders (Gupta and Johari, 2009) and software agents (Bodorik and Jutla, 2008).…”
Section: Data Gathering For Social Engineering (mentioning)
confidence: 99%
“…The main objective of a Web crawler is to provide up-to-date data to a search engine [36]. Crawlers are mainly used to create a copy of every crawled page for later processing by the search engine, which indexes the pages in order to deliver results quickly [37].…”
Section: How a Crawler Works (unclassified)
“…They read and parse Web pages, then follow the hyperlinks on those pages to find more pages to process. The performance of Web crawlers depends on their selection policy, revisit policy, politeness policy, and parallelization policy (Gupta and Johari, 2009; Peisu et al., 2008).…”
Section: Introduction (mentioning)
confidence: 99%
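Of the four policies named in the statement above, the politeness policy is the easiest to isolate: rate-limit requests per host so the crawler does not overload any single server. A minimal sketch, assuming a fixed per-host delay; the `PolitenessScheduler` class and its method names are invented for this example, and the fake clock exists only to keep the demo deterministic.

```python
import time
from urllib.parse import urlparse

class PolitenessScheduler:
    """Tracks per-host request times and enforces a minimum crawl delay."""
    def __init__(self, delay_seconds=1.0, clock=time.monotonic):
        self.delay = delay_seconds
        self.clock = clock
        self.last_hit = {}  # hostname -> timestamp of last request

    def wait_time(self, url):
        """Seconds to wait before this URL's host may be fetched again."""
        last = self.last_hit.get(urlparse(url).netloc)
        if last is None:
            return 0.0
        return max(0.0, self.delay - (self.clock() - last))

    def record(self, url):
        """Note that a request to this URL's host was just made."""
        self.last_hit[urlparse(url).netloc] = self.clock()

# Deterministic demo: a fake clock stands in for real time.
now = [0.0]
sched = PolitenessScheduler(delay_seconds=2.0, clock=lambda: now[0])
sched.record("http://a.com/page1")
now[0] = 0.5
print(sched.wait_time("http://a.com/page2"))  # same host: 1.5 s still to wait
print(sched.wait_time("http://b.com/page"))   # different host: no wait
```

In a full crawler, the frontier would consult `wait_time` before dispatching a URL and sleep or reorder accordingly; the other policies (selection, revisit, parallelization) layer on top of this same per-host bookkeeping.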