2020
DOI: 10.1088/1742-6596/1569/2/022077
Extraction System Web Content Sports New Based On Web Crawler Multi Thread

Abstract: Web crawlers are programs used by search engines to collect information from the internet automatically, according to rules set by the user. With so much sports-news information on the internet, the crawling process demands a web crawler with very high speed. Several previous studies have discussed the process of extracting information from a web document, which needs to be considered from two aspects: the structure of the web page and…
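The abstract motivates a multi-threaded crawler for speed. As a rough sketch only (the full text is not shown here, so this is not the paper's actual design), the following Python illustrates the general pattern: a thread pool fetches a frontier of pages in parallel while a lock guards the shared visited set. The seed URL, page limit, and same-host policy are assumptions.

```python
# Minimal multi-threaded crawler sketch using only the standard library.
import threading
import urllib.parse
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

class Crawler:
    def __init__(self, seed, max_pages=50, workers=8):
        self.seed = seed
        self.host = urllib.parse.urlparse(seed).netloc
        self.max_pages = max_pages
        self.workers = workers
        self.seen = set()
        self.lock = threading.Lock()  # guards the shared visited set

    def fetch(self, url):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode("utf-8", errors="replace")
        except Exception:
            return ""  # treat fetch failures as empty pages

    def process(self, url):
        parser = LinkExtractor()
        parser.feed(self.fetch(url))
        new_urls = []
        for link in parser.links:
            absolute = urllib.parse.urljoin(url, link)
            if urllib.parse.urlparse(absolute).netloc != self.host:
                continue  # stay on the seed host
            with self.lock:
                if absolute not in self.seen and len(self.seen) < self.max_pages:
                    self.seen.add(absolute)
                    new_urls.append(absolute)
        return new_urls

    def run(self):
        self.seen.add(self.seed)
        frontier = [self.seed]
        with ThreadPoolExecutor(max_workers=self.workers) as pool:
            while frontier:
                # fetch the whole frontier in parallel, one task per URL
                results = pool.map(self.process, frontier)
                frontier = [u for batch in results for u in batch]
        return self.seen

if __name__ == "__main__":
    pages = Crawler("https://example.com/", max_pages=20).run()
    print(f"Crawled {len(pages)} pages")
```

Crawling politeness (robots.txt, rate limiting) is omitted for brevity; a production crawler would need both.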

Cited by 4 publications (3 citation statements)
References 5 publications
“…In Pramudita et al (2020), the authors designed a heterogeneous collaborative edge cache framework by jointly optimizing node selection and cache replacement in mobile networks. The joint optimization problem is expressed as a Markov Decision Process (MDP), and a Deep Q Network (DQN) is used to solve it, which alleviates the offloading traffic load.…”
Section: Related Work (mentioning)
confidence: 99%
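The excerpt above casts cache replacement as an MDP solved with a DQN. As an illustration of that framing only, here is a toy sketch: tabular Q-learning stands in for the DQN (this toy state space is small enough to enumerate), and the item catalog, request distribution, cache capacity, and one-step reward (whether the next request hits) are all assumptions, not details from the cited work.

```python
# Toy MDP for cache replacement: state = (cache contents, requested item),
# action = which cache slot to evict, reward = whether the next request hits.
import random
from collections import defaultdict

CATALOG = [0, 1, 2, 3]                # item ids (assumed)
POPULARITY = [0.5, 0.3, 0.15, 0.05]   # skewed request distribution (assumed)
CAPACITY = 2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def next_request():
    return random.choices(CATALOG, weights=POPULARITY)[0]

def choose_eviction(state):
    if random.random() < EPS:
        return random.randrange(CAPACITY)                      # explore
    return max(range(CAPACITY), key=lambda a: Q[(state, a)])   # exploit

cache = CATALOG[:CAPACITY]
hits, pending, N = 0, None, 50_000
for _ in range(N):
    item = next_request()
    hit = item in cache
    hits += hit
    if pending is not None:
        # One-step Q-learning update for the last eviction decision.
        s, a = pending
        nstate = (tuple(sorted(cache)), item)
        best_next = max(Q[(nstate, b)] for b in range(CAPACITY))
        Q[(s, a)] += ALPHA * (float(hit) + GAMMA * best_next - Q[(s, a)])
        pending = None
    if not hit:
        state = (tuple(sorted(cache)), item)
        action = choose_eviction(state)
        cache[action] = item  # evict the chosen slot, admit the request
        pending = (state, action)

print(f"hit rate after learning: {hits / N:.3f}")
```

A DQN would replace the Q table with a neural network over a featurized state, which is what makes the approach viable for realistic cache and catalog sizes.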
“…Murthy & Rani (2022) and Fu et al (2022) describe a model for multi-processor IPC based on cache miss rates (Pramudita et al, 2020). Their model of cache delays for single-threaded workloads uses a similar approach.…”
Section: Related Work (mentioning)
confidence: 99%
“…Other studies used particular techniques for efficient scraping, such as the work of Pramudita et al [9], who used a multithreaded technique to construct their web crawler program. In [10], the authors covered the topic of web crawlers, including their designs and other difficulties that arise when search engines utilize Web crawlers.…”
Section: Introduction (mentioning)
confidence: 99%