2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI: 10.1109/allerton.2016.7852345
Multi-library coded caching

Abstract: We study the problem of coded caching when the server has access to several libraries and each user makes independent requests from every library. The single-library scenario has been well studied, and it has been proved that coded caching can significantly improve the delivery rate compared to uncoded caching. In this work we show that when all the libraries have the same number of files, memory-sharing is optimal and the delivery rate cannot be improved by coding across files from different libraries.
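As a rough illustration of the memory-sharing scheme the abstract refers to, the sketch below evaluates the standard single-library coded-caching delivery rate (the Maddah-Ali–Niesen expression, which is background knowledge and not stated in this page) and then splits the cache equally across several equal-size libraries, running one scheme per library. The function names and the choice of integer cache parameter `t = K*M/N` are illustrative assumptions, not the paper's notation.

```python
def single_library_rate(t, K):
    """Maddah-Ali-Niesen delivery rate for K users at the integer
    cache parameter t = K*M/N (assumed background formula):
        R(t) = K * (1 - t/K) / (1 + t)
    """
    return K * (1 - t / K) / (1 + t)


def multi_library_memory_sharing_rate(t_per_lib, K, L):
    """Memory-sharing across L equal-size libraries: split each user's
    cache equally, run an independent coded-caching scheme per library,
    and sum the per-library delivery rates."""
    return L * single_library_rate(t_per_lib, K)


# Example: K = 4 users, one library, t = 1  ->  rate 4*(3/4)/2 = 1.5.
# With L = 2 equal libraries and the same per-library t, the total
# rate is simply twice that under memory-sharing.
print(single_library_rate(1, 4))
print(multi_library_memory_sharing_rate(1, 4, 2))
```

This is only a sketch of the achievability side; the paper's contribution is the converse, i.e., that no coding across files from different (equal-size) libraries can beat this memory-sharing rate.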

Cited by 6 publications (5 citation statements). References 12 publications.
“…Reception at the users is affected by the fronthaul quantization noise, as well as by the channel noise. If the quantization rate is properly chosen, it can be proved that the achievable NDT is (70), where the term K/min{M, K} is the edge-NDT in (19), which is the same as for the ideal ZF scheme, and the term K/(r min{M, K}) is the fronthaul-NDT (18). A more detailed discussion is provided next.…”
Section: A. Standard Soft-Transfer Fronthauling
Confidence: 99%
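The NDT decomposition quoted above can be checked numerically. The sketch below evaluates the two terms named in the excerpt, the edge-NDT K/min{M, K} and the fronthaul-NDT K/(r·min{M, K}); the function name is illustrative, and the equation numbers (70), (19), (18) refer to the citing paper, not to code here.

```python
def achievable_ndt(K, M, r):
    """Achievable normalized delivery time (NDT) per the quoted
    decomposition: edge-NDT plus fronthaul-NDT."""
    edge_ndt = K / min(M, K)              # term K/min{M,K}, same as ideal ZF
    fronthaul_ndt = K / (r * min(M, K))   # term K/(r*min{M,K})
    return edge_ndt + fronthaul_ndt


# Example: K = 4 users, M = 2 transmitters, fronthaul rate r = 2:
# edge-NDT = 2, fronthaul-NDT = 1, total NDT = 3.
print(achievable_ndt(4, 2, 2))
```

Note that as the fronthaul rate r grows, the fronthaul-NDT vanishes and the total approaches the edge-NDT of the ideal ZF scheme, consistent with the excerpt.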
“…This revolutionary idea has spurred a wealth of research efforts, such as device-to-device (D2D) networks [50], [51], non-uniform content popularities [52], online caching policies [53], multi-servers [54], multi-library [55], and combination with CSI [56]. From the implementation point of view, promising research directions include extensions to capture system aspects such as (i) popularity skew, (ii) asynchronous requests, (iii) finite code lengths and (iv) cache sizes that scale slower than M.…”
Section: B. Coded Caching for Broadcast Medium Exploitation
Confidence: 99%
“…In our setup, each receiver demands a file from only one of the two libraries. This differs from the setup in [8] (which does not impose secrecy constraints), where each receiver demands a file from each library. Like in the standard coded caching scenario, the transmitter ignores the receivers' demands during the placement phase.…”
Section: Introduction
Confidence: 99%