2017 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)
DOI: 10.1109/cloudcom.2017.40
A Tale of Two Systems: Using Containers to Deploy HPC Applications on Supercomputers and Clouds

Cited by 61 publications (31 citation statements). References 17 publications.
“…Docker is found to have significant overhead for multi-node MPI applications (Younge et al. 2017; Zhang et al. 2017); Singularity achieves better performance by directly using the MPI installed on the host machine, but requires compatibility between the MPI libraries inside and outside of the container. These issues will be addressed in future work.…”
Section: Discussion
confidence: 99%
“…On the native non-container environment, in contrast, even if the user installs exactly the same software versions with Spack, there is still a chance for operating-system-specific errors to occur, especially for legacy software modules that are not extensively tested on multiple platforms. Despite some pioneering examples of using containers within multi-node MPI environments (Younge et al., 2017; Zhang et al., 2017), such usage can still be challenging in practice, as stated in the Singularity container documentation (https://sylabs.io/guides/3.3/user-guide/mpi.html): "the MPI in the container must be compatible with the version of MPI available on the host" and "the configuration of the MPI implementation in the container must be configured for optimal use of the hardware if performance is critical". In this work, we use the native system without containers.…”
Section: Appendix B Approaches To Install HPC Software Libraries On…
confidence: 99%
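The compatibility constraint quoted above can be sketched as a simple version check. The helper below is a hypothetical illustration, not part of the paper or of Singularity itself; it only captures the common rule of thumb that the host and container MPI libraries should agree at least on their major.minor series.

```python
# Hypothetical helper illustrating the host/container MPI compatibility
# constraint quoted from the Singularity documentation. The matching rule
# (same major.minor series) is an assumption for illustration, not an
# official specification.

def mpi_compatible(host_version: str, container_version: str) -> bool:
    """Compare the 'major.minor' prefix of two MPI version strings,
    e.g. '4.1.2' vs '4.1.5' -> compatible; '4.1.2' vs '3.1.6' -> not."""
    host = host_version.split(".")[:2]
    container = container_version.split(".")[:2]
    return host == container

print(mpi_compatible("4.1.2", "4.1.5"))  # True: both in the 4.1 series
print(mpi_compatible("4.1.2", "3.1.6"))  # False: major version mismatch
```

In the hybrid launch model the citation statement alludes to, `mpirun` on the host starts one container instance per rank, so a mismatch between the two MPI stacks surfaces only at run time; a check like this would catch it earlier.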
“…Figure 2 shows the details of a possible solution for this example. However, if c3 was assigned to n4, the remaining resources of n3 would be Rem(n3) = ⟨2, 1024⟩, which would be enough to deploy c8.…”
Section: Container Scheduling Problem Formulation
confidence: 99%
“…It can be seen that container c8 is not assigned to any node because none of them has enough remaining resources to allocate this container. However, if c3 was assigned to n4, the remaining resources of n3 would be Rem(n3) = ⟨2, 1024⟩, which would be enough to deploy c8. This example demonstrates that not every valid solution is optimal.…”
Section: Container Scheduling Problem Formulation
confidence: 99%
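The scheduling example in the two statements above can be sketched numerically. The snippet below is a minimal illustration, not the cited paper's algorithm: node capacities and the container demands c1 and c3 are invented so that the arithmetic reproduces Rem(n3) = ⟨2, 1024⟩ from the quote.

```python
# Minimal sketch of the container-scheduling example. Each node has a
# capacity <cores, memory MB>; Rem(n) is the capacity left after
# subtracting the demands of the containers assigned to n. All numbers
# except Rem(n3) = <2, 1024> are invented for illustration.

def remaining(capacity, assigned):
    """Remaining <cores, MB> on a node given its assigned containers."""
    return (capacity[0] - sum(c[0] for c in assigned),
            capacity[1] - sum(c[1] for c in assigned))

def fits(rem, demand):
    """A container fits only if both core and memory demands are covered."""
    return rem[0] >= demand[0] and rem[1] >= demand[1]

n3_capacity = (4, 2048)   # invented node capacity
c1 = (2, 1024)            # invented container already placed on n3
c3 = (1, 512)             # invented demand for c3
c8 = (2, 1024)            # invented demand for c8

# Valid but suboptimal: c3 also lands on n3, leaving too little for c8.
print(fits(remaining(n3_capacity, [c1, c3]), c8))   # False
# Better: move c3 to n4 instead; Rem(n3) = (2, 1024) now fits c8.
print(fits(remaining(n3_capacity, [c1]), c8))       # True
```

This mirrors the quoted observation: both placements of c3 are valid, but only one leaves enough headroom on n3 for c8, so not every valid solution is optimal.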