2018
DOI: 10.14778/3236187.3236209

Efficient distributed memory management with RDMA and caching

Abstract: Recent advancements in high-performance networking interconnect significantly narrow the performance gap between intra-node and inter-node communications, and open up opportunities for distributed memory platforms to enforce cache coherency among distributed nodes. To this end, we propose GAM, an efficient distributed in-memory platform that provides a directory-based cache coherence protocol over remote direct memory access (RDMA). GAM manages the free memory distributed among multiple nodes to provide a unif…
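As a rough illustration of what a directory-based cache coherence protocol over RDMA involves, the following C++ sketch shows how a home node might keep a directory entry per cached memory block, recording its state and which nodes hold copies. The state names, class, and methods are assumptions for exposition only, not GAM's actual data structures or API, and real invalidations and write-backs would travel over RDMA rather than the placeholder steps shown.

// Hypothetical home-node directory for a directory-based coherence protocol.
// Illustrative only; not GAM's implementation.
#include <cstdint>
#include <set>
#include <unordered_map>

enum class State { Unshared, Shared, Dirty };   // per-block directory state

struct DirectoryEntry {
    State state = State::Unshared;
    std::set<int> sharers;   // node IDs holding a read-only copy
    int owner = -1;          // node ID holding the dirty copy, if any
};

class HomeNodeDirectory {
public:
    // Remote read request: fold a dirty owner back into the sharer set
    // (standing in for a write-back) and record the new reader.
    void onRemoteRead(std::uint64_t block, int readerNode) {
        DirectoryEntry &e = dir_[block];
        if (e.state == State::Dirty && e.owner != -1) {
            e.sharers.insert(e.owner);
            e.owner = -1;
        }
        e.sharers.insert(readerNode);
        e.state = State::Shared;
    }

    // Remote write request: drop all other copies (standing in for
    // invalidation messages) and hand exclusive ownership to the writer.
    void onRemoteWrite(std::uint64_t block, int writerNode) {
        DirectoryEntry &e = dir_[block];
        e.sharers.clear();
        e.owner = writerNode;
        e.state = State::Dirty;
    }

private:
    std::unordered_map<std::uint64_t, DirectoryEntry> dir_;  // keyed by global block address
};

A full protocol would additionally handle request forwarding, ownership transfer, and synchronization; the sketch only shows the directory bookkeeping that keeps cached copies coherent.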

Cited by 79 publications (32 citation statements)
References 42 publications (64 reference statements)

Citation statements (ordered by relevance):

“…Hence, the window slicing operator γ is also implemented inside initWindow. Thereafter, for each qualified sliding position of the time window M_c.A.W, we compute the causative behavior and allocate the current user to the respective cohort (lines 12-14). For dependent behavior measurement, we also first evaluate all the involved time window attributes for each time slice (line 16).…”
Section: Methods (mentioning, confidence: 99%)
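The excerpt above describes the citing paper's cohort-allocation step: a time window is slid over each user's activity, a causative behavior is computed at every qualified window position, and the user is assigned to the matching cohort. The following C++ sketch only illustrates that general pattern under assumed types and names; it is not the cited paper's initWindow code, and the behavior function is a placeholder.

// Hypothetical sliding-window cohort allocation; the Event type, the
// causative-behavior bucketing, and all names are illustrative assumptions.
#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct Event { long timestamp; double value; };

// Summarise one window position into a cohort key (placeholder logic).
static std::string causativeBehavior(const std::vector<Event> &window) {
    double total = 0.0;
    for (const Event &e : window) total += e.value;
    return total > 100.0 ? "heavy" : "light";
}

// Slide a fixed-size window over each user's events and allocate the user
// to the cohort produced at every qualified sliding position.
std::map<std::string, std::vector<int>>
allocateCohorts(const std::vector<std::vector<Event>> &eventsPerUser,
                std::size_t windowSize) {
    std::map<std::string, std::vector<int>> cohorts;  // cohort key -> user IDs
    for (int user = 0; user < static_cast<int>(eventsPerUser.size()); ++user) {
        const std::vector<Event> &events = eventsPerUser[user];
        if (events.size() < windowSize) continue;      // no qualified position
        for (std::size_t start = 0; start + windowSize <= events.size(); ++start) {
            std::vector<Event> window(events.begin() + start,
                                      events.begin() + start + windowSize);
            // A real implementation would likely deduplicate users per cohort.
            cohorts[causativeBehavior(window)].push_back(user);
        }
    }
    return cohorts;
}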
“…There have been many frameworks for distributed computation [13,14,15,30,31]. Such frameworks, however, cannot be used for our purpose as the cohort operators cannot be readily mapped into their computation models.…”
Section: Distributed Processing (mentioning, confidence: 99%)
“…When large amounts of fragmented data are stored across a large number of devices, data have to be processed locally and/or transferred to the cloud for large scale analytics. For local data management and resource sharing over devices, efficient and light data management is required, possibly with some form of distributed shared memory [11].…”
Section: Challenges and Opportunities (mentioning, confidence: 99%)
“…Gu et al [19] design a remote memory paging system with RDMA that can run applications that do not fit in local memory without modification. Cai et al [8] design GAM, a new distributed in-memory platform that provides cache coherence with RDMA.…”
Section: Related Work (mentioning, confidence: 99%)