Proceedings of the Tenth European Conference on Computer Systems 2015
DOI: 10.1145/2741948.2741962
Popcorn

Abstract: The recent possibility of integrating multiple-OS-capable, high-core-count, heterogeneous-ISA processors in the same platform poses a question: given the tight integration between system components, can a shared memory programming model be adopted, enhancing programmability? If this can be done, an enormous amount of existing code written for shared memory architectures would not have to be rewritten to use a new programming paradigm (e.g., code offloading) that is often very expensive and error prone. We prop…

Cited by 56 publications (8 citation statements). References 17 publications.
“…Traditionally, developers have used the message passing interface (MPI) to distribute execution across nodes [11]. Deemed the "assembly language of parallel processing" [18], MPI forces developers to orchestrate parallel computation and manually keep memory consistent across nodes through low-level send/receive APIs, which leads to complex applications [4]. Partitioned global address space (PGAS) languages like Unified Parallel C [9] and X10 [7] provide language, compiler and runtime features for a shared memory-esque abstraction on clusters.…”
Section: Related Work
Confidence: 99%
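The contrast this excerpt draws, explicit send/receive orchestration versus direct reads and writes to shared state, can be sketched schematically. The following Python fragment uses a thread and a queue as a stand-in for MPI's low-level send/receive channel (real MPI programs use calls such as `MPI_Send`/`MPI_Recv` across nodes; this is only an illustration of the programming-model difference, not of MPI itself):

```python
# Schematic contrast: message passing vs. shared memory.
# A queue stands in for an MPI-style channel; a shared list stands in
# for directly addressable shared memory. Illustrative only.
import threading
import queue

def message_passing_sum(data):
    """The worker computes a partial sum and must explicitly send it back."""
    chan = queue.Queue()              # stand-in for the communication channel
    def worker(chunk):
        chan.put(sum(chunk))          # explicit send: memory is not shared
    t = threading.Thread(target=worker, args=(data[len(data) // 2:],))
    t.start()
    local = sum(data[:len(data) // 2])
    remote = chan.get()               # explicit receive, manually "kept consistent"
    t.join()
    return local + remote

def shared_memory_sum(data):
    """With shared memory, both sides simply write into the same structure."""
    result = [0, 0]                   # shared state, visible to both threads
    def worker(chunk):
        result[1] = sum(chunk)        # writes land directly in shared memory
    t = threading.Thread(target=worker, args=(data[len(data) // 2:],))
    t.start()
    result[0] = sum(data[:len(data) // 2])
    t.join()
    return result[0] + result[1]

print(message_passing_sum(list(range(10))))  # 45
print(shared_memory_sum(list(range(10))))    # 45
```

Both variants compute the same result; the point is that the message-passing version forces the programmer to manage the channel and data movement explicitly, which is the complexity Popcorn's shared-memory abstraction aims to avoid.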
“…Because the DSM is implemented transparently by the OS, existing shared-memory applications can execute across nodes unmodified. The complete details of Popcorn Linux can be found in past works [3,4,15,16,25].…”
Section: Design and Implementation
Confidence: 99%
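The idea of an OS-transparent DSM, where a node that touches a non-resident page fetches it from its owner so that application code sees ordinary shared memory, can be sketched as a toy protocol. All names below (`Node`, `Cluster`, `place`, `owner_of`) are illustrative inventions, not Popcorn Linux's actual mechanism:

```python
# Toy sketch of software DSM: a node keeps a local page cache and,
# on first access ("page fault"), fetches a copy of the page from its
# owner. Application code just calls read() as if memory were shared.
# Hypothetical names; not Popcorn Linux's real implementation.
class Node:
    def __init__(self, name):
        self.name = name
        self.pages = {}                  # locally resident pages

    def read(self, page_id, cluster):
        if page_id not in self.pages:    # miss: page lives on another node
            owner = cluster.owner_of(page_id)
            self.pages[page_id] = dict(owner.pages[page_id])  # fetch a copy
        return self.pages[page_id]

class Cluster:
    def __init__(self):
        self.nodes = {}
        self.ownership = {}              # page_id -> owning node name

    def add(self, node):
        self.nodes[node.name] = node

    def place(self, page_id, node_name, contents):
        self.nodes[node_name].pages[page_id] = contents
        self.ownership[page_id] = node_name

    def owner_of(self, page_id):
        return self.nodes[self.ownership[page_id]]

cluster = Cluster()
a, b = Node("A"), Node("B")
cluster.add(a)
cluster.add(b)
cluster.place(0, "A", {"x": 42})         # page 0 is resident on node A
print(b.read(0, cluster)["x"])           # node B faults, fetches the page: 42
```

A real DSM additionally handles writes, invalidation, and coherence between copies; this sketch only shows why the application itself needs no modification, since the fetch happens inside `read` (in Popcorn's case, inside the OS page-fault path).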
“…Obviously, the hypervisor of a disaggregated rack should be distributed, meaning that each resource blade runs a piece of the hypervisor. Building on existing OS paradigms, two main approaches can be envisioned for the construction of such a distributed hypervisor, namely multi-hypervisor (analogous to multi-kernel [2,3]) and split-hypervisor (analogous to split-kernel [30]). In the former, the hypervisor is a collection of several complete hypervisors.…”
Section: Which Software Infrastructure?
Confidence: 99%
“…The question is how to guarantee certain performance levels for this path. Several solutions can be adopted, including: (i) building CPU blades with large memory caches (gigabytes) and optimizing the hypervisor to efficiently exploit this local cache [29,30]; (ii) building dedicated network links between CPU and memory blades; (iii) allowing bandwidth reservations on a shared interconnect.…”
Section: Long-term Challenges
Confidence: 99%