2020
DOI: 10.48550/arxiv.2007.15859
Preprint

Learning Forward Reuse Distance

Pengcheng Li, Yongbin Gu

Abstract: Caching techniques are widely used in the era of cloud computing, from applications such as Web caches, to infrastructures such as Memcached, to memory caches in computer architectures. Prediction of cached data can greatly help improve cache management and performance. The recent advancement of deep learning techniques enables the design of novel intelligent cache replacement policies. In this work, we propose a learning-aided approach to predict future data accesses. We find that a powerful LSTM-based recurrent neura…
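As a rough, illustrative sketch only (not the authors' released model), an LSTM-based regressor for forward reuse distance might look like the following, assuming PyTorch and a hypothetical input window of recent per-access reuse-distance history:

```python
import torch
import torch.nn as nn

class ReuseDistanceLSTM(nn.Module):
    """Hypothetical LSTM regressor: maps a window of past reuse distances
    (and any other per-access features) to the predicted forward reuse
    distance of the current access."""
    def __init__(self, n_features=1, hidden=64, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, n_features)
        out, _ = self.lstm(x)          # out: (batch, window, hidden)
        return self.head(out[:, -1])   # predict from the last time step

# Toy usage: a batch of 8 windows, each 32 accesses long, 1 feature per access.
model = ReuseDistanceLSTM()
pred = model(torch.randn(8, 32, 1))    # pred shape: (8, 1)
```

The window length, feature encoding, and network sizes here are placeholders; the paper itself should be consulted for the actual architecture and training setup.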

Cited by 2 publications (3 citation statements)
References 25 publications
“…The replacer in Catcher then uses P LRU∼LFU to decide which replacement policy to choose when the cache misses, which avoids the need to predict every block in the cache (replacement policy will determine which block is replaced, Catcher does not need to specify exactly). Compared with previous studies (Song et al 2020;Liu et al 2020;Shi et al 2019;Li and Gu 2020), Catcher can reduce large-scale operations and improve operational efficiency (when there are many blocks in the cache, the computation overhead and time delay of predicting all blocks will be huge when each round of requests arrives). The actions collected by AW in Catcher come from the second half of s t and the first half of s t−1 to reflect the probability distribution of the replacement policy when the state changes from s t−1 to s t .…”
Section: DDPG for Catcher
confidence: 91%
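As a rough illustration of the mechanism quoted above (not Catcher's actual implementation), a replacer can sample one of two classic eviction policies from a learned probability only when a miss occurs, instead of scoring every cached block. The cache layout and probability source below are hypothetical:

```python
import random

def choose_victim(cache, p_lru):
    """Toy policy selector in the spirit of the quoted description:
    on a miss, evict by LRU with probability p_lru (a value a learned
    agent would supply), otherwise by LFU. `cache` maps each block to
    {'last_use': int, 'freq': int} (a hypothetical bookkeeping layout)."""
    if random.random() < p_lru:
        return min(cache, key=lambda b: cache[b]['last_use'])   # LRU victim
    return min(cache, key=lambda b: cache[b]['freq'])           # LFU victim

cache = {'A': {'last_use': 10, 'freq': 5},
         'B': {'last_use': 42, 'freq': 1},
         'C': {'last_use': 7,  'freq': 9}}
print(choose_victim(cache, p_lru=0.8))  # usually 'C' (oldest), sometimes 'B' (rarest)
```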
“…Other classes see (Park and Park 2017) in detail.). Li and Gu (2020) characterize the patterns of these workloads on a basis of time-series reuse distance trend and classify these workloads into six patterns like Triangle, Clouds, and so on. Similar work includes Chakraborttii and Litz (2020), Rodriguez et al (2021), etc.…”
Section: Workload Distribution
confidence: 99%
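For context, the forward reuse distance of an access is commonly defined as the number of distinct addresses referenced before the same address is touched again (infinite, or undefined, if it never recurs); the time series of these values is the signal such workload characterizations are built on, though some works count raw accesses instead of distinct addresses. A small illustrative sketch, not taken from any of the cited papers:

```python
def forward_reuse_distances(trace):
    """Forward reuse distance per access: number of distinct addresses
    seen strictly between an access and the next access to the same
    address; None if the address is never accessed again."""
    result = [None] * len(trace)
    for i, addr in enumerate(trace):
        seen = set()
        for j in range(i + 1, len(trace)):
            if trace[j] == addr:
                result[i] = len(seen)
                break
            seen.add(trace[j])
    return result

print(forward_reuse_distances(list("abcab")))
# [2, 2, None, None, None]: 'a' sees {b, c} before its reuse, 'b' sees {c, a},
# and the last three accesses are never reused within the trace.
```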
“…Early Exit [12,118,126,161,194,244,245,249,271,282,313] Model Selection [159,191,271,314] Result Cache [13,39,53,92,93,96,108,112,114,123,209,268,293,319] 3.3.1 Model Compression: Model compression techniques facilitate the deployment of resource-hungry AI models into resourceconstrained EDGE servers by reducing the complexity of the DNN. Model compression exploits the sparse nature of gradients' and computation involved while training the DNN model.…”
Section: Model Compression
confidence: 99%