2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS)
DOI: 10.1109/iwqos.2018.8624176
Toward Smart and Cooperative Edge Caching for 5G Networks: A Deep Learning Based Approach

Cited by 60 publications (52 citation statements) · References 8 publications
“…Finally, [8] is the closest paper to ours, because it uses an LSTM-NN as a popularity predictor and then manages the cache as a priority queue where, upon a miss, contents with the smallest predicted popularity are evicted. We have implemented the caching policy in [8] and compared it with ours in Sect. 4.…”
Section: Related Work (confidence: 99%)
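The eviction rule quoted above can be sketched as a small popularity-keyed cache. Here `predict` is a hypothetical callable standing in for the learned LSTM popularity predictor, and the admission check (rejecting a missed content predicted less popular than everything already cached) is one plausible reading of "keep the contents with the largest predicted popularity", not the cited paper's exact implementation.

```python
class PopularityCache:
    """Cache managed as a priority structure keyed on predicted popularity.

    On a miss with a full cache, the content with the smallest predicted
    popularity is the eviction victim. `predict` is a placeholder for a
    learned predictor (e.g. an LSTM-NN): any callable mapping a content
    id to a popularity score.
    """

    def __init__(self, capacity, predict):
        self.capacity = capacity
        self.predict = predict
        self.store = {}  # content id -> predicted popularity

    def request(self, cid):
        """Return True on a cache hit, False on a miss."""
        if cid in self.store:
            return True
        if len(self.store) >= self.capacity:
            # victim = cached content with the smallest predicted popularity
            victim = min(self.store, key=self.store.get)
            # admit the new content only if it is predicted more popular
            # than the victim (assumption; see lead-in)
            if self.predict(cid) <= self.store[victim]:
                return False
            del self.store[victim]
        self.store[cid] = self.predict(cid)
        return False
```

For example, with capacity 2 and a table-backed predictor, requesting a new content predicted more popular than the current minimum evicts that minimum, while a less popular content is fetched but not cached.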
“…The core idea of our policy is simple and was naturally adopted by previous works like [12] and [8]: we keep in the cache the contents with the largest estimated popularities. The popularity of a content is defined as the fraction of requests for that content over a meaningful time horizon.…”
Section: Caching Policy (confidence: 99%)
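The quoted definition, popularity as the fraction of requests for a content over a meaningful time horizon, can be maintained online with a sliding window. This is a minimal sketch; the window length is an illustrative parameter, not a value from the cited papers.

```python
from collections import Counter, deque


class WindowPopularity:
    """Estimate a content's popularity as the fraction of requests it
    received among the last `horizon` requests (sliding-window estimate
    of the definition quoted above)."""

    def __init__(self, horizon):
        self.horizon = horizon
        self.window = deque()     # request ids in arrival order
        self.counts = Counter()   # content id -> requests inside the window

    def record(self, cid):
        """Log one request and slide the window if it exceeds the horizon."""
        self.window.append(cid)
        self.counts[cid] += 1
        if len(self.window) > self.horizon:
            old = self.window.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]

    def popularity(self, cid):
        """Fraction of requests for `cid` within the current window."""
        if not self.window:
            return 0.0
        return self.counts[cid] / len(self.window)
```

A caching policy like the one above would keep the contents whose `popularity` estimates are largest.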
“…Due to the complexity of the real environment, these conventional replacement policies cannot accurately capture the dynamic characteristics of content popularity [4]. Inspired by the success of reinforcement learning (RL) in solving complicated control problems [5], the works in [6], [7] relied on the strong feature representation ability of deep neural networks (DNNs) [8] and adopted model-free deep RL (DRL) to maximize the long-term system reward in mobile edge caching. In [6]-[8], the edge node fetches the missed content from the source server and replaces its local cache with the newly fetched content.…”
Section: Introduction (confidence: 99%)
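As a rough illustration of the model-free RL formulation described above, the sketch below uses a tabular stand-in for the DRL agent: the state is the current cache contents, the action is which slot to overwrite on a miss, and the reward credited to a decision is the number of hits observed until the next miss. Both the Q-table (replacing the DNN of [6]-[8]) and this reward shaping are assumptions made for the sketch, not the cited papers' design.

```python
import random


def q_caching_sim(num_requests=5000, num_contents=4, cache_size=2,
                  alpha=0.1, epsilon=0.1, seed=0):
    """Toy model-free RL loop for cache replacement at a single edge node.

    On each miss the agent picks a cache slot to overwrite with the
    fetched content (epsilon-greedy over a Q-table); the reward for that
    decision is the hit count accumulated before the following miss.
    Returns the overall hit rate and the learned Q-table.
    """
    rng = random.Random(seed)
    # Zipf-like request skew over contents 0..num_contents-1 (illustrative)
    weights = [1.0 / (i + 1) for i in range(num_contents)]
    cache = list(range(cache_size))
    q = {}           # (state, slot) -> estimated hits until the next miss
    pending = None   # last (state, slot) decision awaiting its reward
    accum = 0        # hits observed since that decision
    hits = 0
    for _ in range(num_requests):
        cid = rng.choices(range(num_contents), weights=weights)[0]
        if cid in cache:
            hits += 1
            accum += 1
            continue
        # miss: close out the previous decision with its observed reward
        if pending is not None:
            old = q.get(pending, 0.0)
            q[pending] = old + alpha * (accum - old)
        state = tuple(sorted(cache))
        if rng.random() < epsilon:   # explore
            slot = rng.randrange(cache_size)
        else:                        # exploit current Q estimates
            slot = max(range(cache_size),
                       key=lambda a: q.get((state, a), 0.0))
        cache[slot] = cid            # fetch missed content, replace slot
        pending, accum = (state, slot), 0
    return hits / num_requests, q
```

The deep variants in the cited works replace the `(state, slot)` table lookup with a DNN that generalizes across states; the loop structure is otherwise the same.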