2013 IEEE International Conference on Cluster Computing (CLUSTER)
DOI: 10.1109/cluster.2013.6702617
Mercury: Enabling remote procedure call for high-performance computing

Cited by 64 publications (23 citation statements)
References 13 publications
“…As communication technology evolves, algorithm simulation and test verification for TD-LTE are increasingly needed in scenarios that approximate practical deployment. Previously, algorithms could be verified on a system-level simulation platform [1] [2]. If the link level is abstracted into an idealized performance map, the mapping becomes increasingly unable to reflect complex real-world environments.…”
Section: Introduction (mentioning; confidence: 99%)
“…For load-balancing, all data and metadata are distributed across all nodes using the HPC RPC framework Mercury [34]. The file system runs in user space and can be deployed by any user in under 20 seconds on a 512-node cluster.…”
Section: Introduction (mentioning; confidence: 99%)
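The load-balancing scheme described in this statement relies on every client deterministically mapping a path to its owning server and then issuing the operation as a Mercury RPC. The following C sketch illustrates that pattern using Mercury's public HG_Create/HG_Forward calls; the hash function choice, addr_table, stat_rpc_id, and forward_stat are illustrative assumptions, not the cited file system's actual code.

```c
/* Sketch: hash-based distribution of metadata operations across servers.
 * The Mercury calls follow the public API (HG_Create, HG_Forward);
 * hg_ctx, addr_table[], and stat_rpc_id are assumed to have been set up
 * at mount time via HG_Init, HG_Addr_lookup, and HG_Register_name. */
#include <stdint.h>
#include <mercury.h>

#define NUM_SERVERS 512

extern hg_context_t *hg_ctx;                  /* progress context (assumed) */
extern hg_addr_t     addr_table[NUM_SERVERS]; /* resolved server addresses  */
extern hg_id_t       stat_rpc_id;             /* registered "stat" RPC id   */

/* FNV-1a: a deterministic hash, so every client maps a given path to the
 * same metadata server without any coordination. */
static uint32_t fnv1a(const char *s)
{
    uint32_t h = 2166136261u;
    for (; *s; s++) { h ^= (uint8_t)*s; h *= 16777619u; }
    return h;
}

/* Issue a metadata RPC for `path` to the server that owns it. */
static hg_return_t forward_stat(const char *path, hg_cb_t done_cb,
                                void *in_struct)
{
    hg_handle_t handle;
    uint32_t srv = fnv1a(path) % NUM_SERVERS;   /* owner of this path */

    hg_return_t ret = HG_Create(hg_ctx, addr_table[srv], stat_rpc_id, &handle);
    if (ret != HG_SUCCESS)
        return ret;
    /* Completion is delivered through done_cb once HG_Progress/HG_Trigger
     * run in the client's progress loop. */
    return HG_Forward(handle, done_cb, NULL, in_struct);
}
```

Because the mapping is a pure function of the path, adding clients requires no shared placement state, which is what makes the fast, user-space deployment described in the quote plausible.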
“…As a result, we had previous implementations based on MPI-2.2 RMA and on two-sided emulation of the one-sided communications [7]. The performance results of this RPC-based I/O protocol over two-sided emulation of MPI one-sided routines are available for the Cray interconnect and InfiniBand in [7].…”
Section: Bulk Data Transfer and MPI One-Sided Communications (mentioning; confidence: 99%)
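For reference, a minimal MPI-2.2 RMA bulk pull of the kind the quote contrasts with two-sided emulation looks as follows; the ranks, buffer size, and fence-based synchronization are illustrative choices, not the cited implementation.

```c
/* Sketch of an MPI-2.2 RMA bulk pull: one rank exposes a window and
 * another rank MPI_Gets the data with no matching send on the target.
 * Run with at least 2 ranks. */
#include <mpi.h>
#include <stdlib.h>

#define BUF_SIZE (1 << 20)   /* 1 MiB bulk buffer (illustrative) */

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(BUF_SIZE);
    MPI_Win win;
    /* Every rank exposes its buffer here for simplicity; in an RPC-based
     * I/O protocol only the server side would expose memory. */
    MPI_Win_create(buf, BUF_SIZE, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);              /* open the RMA epoch */
    if (rank == 1)
        /* Rank 1 pulls rank 0's buffer directly: a true one-sided
         * transfer, with no receive posted on the target. */
        MPI_Get(buf, BUF_SIZE, MPI_BYTE, 0, 0, BUF_SIZE, MPI_BYTE, win);
    MPI_Win_fence(0, win);              /* close epoch: data now visible */

    MPI_Win_free(&win);
    free(buf);
    MPI_Finalize();
    return 0;
}
```

In the two-sided emulation the quote mentions, the MPI_Get would be replaced by a small request message plus a matching MPI_Send/MPI_Recv pair serviced by the target, which is why its performance is reported separately in [7].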
“…It also defers flow control to the I/O server and eases the burden of the high ratio of compute nodes to I/O nodes in current and future HPC systems. The work on Mercury [7], which is centered around RPC over the same protocol, provides in-depth comparisons with other existing RPC frameworks and protocols.…”
Section: Register Memory (mentioning; confidence: 99%)
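Server-directed flow control of this kind falls out of Mercury's bulk interface: the client only registers its buffer and ships a bulk handle inside the RPC, and the server decides when to pull the data. Below is a hedged C sketch assuming Mercury's HG_Bulk_create/HG_Bulk_transfer API; the helper names and the split between the two fragments are illustrative, not the cited code.

```c
/* Sketch: server-directed bulk transfer with Mercury's bulk API.
 * Registration on the client moves no data; the transfer happens only
 * when the server initiates the pull. */
#include <mercury.h>
#include <mercury_bulk.h>

/* Client side: expose the source buffer for remote access.
 * Registers `buf` with the transport; no data moves yet. */
static hg_return_t expose_write_buffer(hg_class_t *cls, void *buf,
                                       hg_size_t len, hg_bulk_t *bulk)
{
    return HG_Bulk_create(cls, 1, &buf, &len, HG_BULK_READ_ONLY, bulk);
}

/* Server-side RPC handler fragment: pull whenever local resources allow.
 * This is the server-directed flow control the quote describes: the
 * server paces transfers instead of being flooded by compute nodes. */
static hg_return_t pull_from_client(hg_context_t *ctx, hg_addr_t client,
                                    hg_bulk_t client_bulk,
                                    hg_bulk_t local_bulk,
                                    hg_size_t len, hg_cb_t done_cb)
{
    return HG_Bulk_transfer(ctx, done_cb, NULL, HG_BULK_PULL,
                            client, client_bulk, 0,
                            local_bulk, 0, len, HG_OP_ID_IGNORE);
}
```

Since the server schedules every HG_Bulk_transfer itself, a high compute-to-I/O-node ratio translates into queued pull requests rather than unsolicited incoming data, matching the benefit described in the quote.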