2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
DOI: 10.1109/ipdps47924.2020.00063
Bandwidth-Aware Page Placement in NUMA

Cited by 11 publications (10 citation statements)
References 29 publications
“…When E = ∅, COLOURED-BIN-PACKING reduces to the standard bin packing problem (d = 1). Therefore, the problem is NP-complete, APX-hard, and no algorithm can achieve a better approximation ratio than 3/2 (unless P = NP). It comes from the approximation hardness of standard bin packing through a reduction from the partition problem [18].…”
Section: E. Colored Bin Packing
confidence: 99%
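For context, the 3/2 bound mentioned in that statement follows from the textbook reduction from PARTITION to bin packing. The sketch below writes out that standard argument; it is provided for orientation and is not taken from the quoted paper itself.

% Standard hardness sketch: take a PARTITION instance a_1, ..., a_n with sum 2B,
% build items of size a_i / B and unit-capacity bins. The items fit into 2 bins
% iff a partition exists, so any polynomial-time algorithm with ratio < 3/2
% would answer "2 bins" exactly when 2 bins suffice, deciding PARTITION.
\[
  \exists\, S \subseteq \{1,\dots,n\} :\ \sum_{i \in S} a_i = B
  \;\Longleftrightarrow\;
  \mathrm{OPT}\Bigl(\bigl\{\tfrac{a_i}{B}\bigr\}_{i=1}^{n}\Bigr) \le 2,
  \qquad \text{where } \sum_{i=1}^{n} a_i = 2B .
\]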
“…In particular, the network infrastructure across Internet Service Providers consists of compute servers made up of zones with different capacity constraints as part of horizontal scaling. An example of such zones is NUMA zones [3], which are used as a protection mechanism for the host and are, in practice, deployed as such on physical motherboards.…”
Section: Introduction
confidence: 99%
“…Therefore, to quantify the potential advantage of an ideal bandwidth balance, we return to our initial benchmark, and focus exclusively on the all-reads workload, i.e., the best-case scenario for bandwidth balance. In contrast to the previous experiment, this time we distribute pages across tiers according to different ratios (100% in DRAM, 95%, 90%, ...), using weighted-interleaved placement [15]. Also, this time we vary the number of active threads to test different memory access rates.…”
Section: Bandwidth Balance Policy
confidence: 99%
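As a concrete illustration of what a ratio-based ("weighted-interleaved") page distribution looks like at the OS level, here is a minimal sketch using libnuma. It assumes a two-tier machine in which node 0 is the fast tier (DRAM) and node 1 the slow tier; the node IDs and the 90/10 split are illustrative placeholders, not the setup from the quoted experiment nor the mechanism of [15].

/* weighted_place.c - place pages across two NUMA nodes in a 90/10 ratio.
 * Sketch only; node IDs and the split are assumptions for illustration.
 * Build with: gcc -O2 weighted_place.c -o weighted_place -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not available on this system\n");
        return 1;
    }

    const size_t page = (size_t)sysconf(_SC_PAGESIZE);
    const size_t npages = 1 << 16;           /* 256 MiB with 4 KiB pages */
    const int fast_node = 0, slow_node = 1;  /* illustrative node IDs */
    const int fast_share = 90;               /* 90% of pages on the fast tier */

    char *buf = numa_alloc_onnode(npages * page, fast_node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    /* Set a per-page binding according to the weight ratio, then touch the
     * page so it is faulted in on the chosen node. */
    for (size_t i = 0; i < npages; i++) {
        char *p = buf + i * page;
        int node = (i % 100 < (size_t)fast_share) ? fast_node : slow_node;
        numa_tonode_memory(p, page, node);
        memset(p, 1, page);
    }

    printf("Placed %zu pages: %d%% on node %d, %d%% on node %d\n",
           npages, fast_share, fast_node, 100 - fast_share, slow_node);

    numa_free(buf, npages * page);
    return 0;
}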
“…State-of-the-art memory page placement mechanisms interleave pages evenly across NUMA nodes. However, this approach fails to maximize memory throughput in NUMA systems characterized by asymmetric bandwidths and latencies, and sensitive to memory contention and interconnect congestion effects [9]. Different solutions have been proposed, such as modifying the Linux kernel to improve load balancing and thread migration algorithms, changing memory page placement policies, or profiling the executed program to obtain the optimal thread placement, among others.…”
Section: Related Work
confidence: 99%
“…They have obtained important performance improvements with the PARSEC 3.0 benchmarks. Also, the most recent work by Gureya et al. [9] proposes a novel page placement mechanism based on asymmetric-weight page interleaving that combines an analytical performance model of the target NUMA system with on-line iterative tuning of the page distribution for a given memory-intensive application. By only migrating memory pages and entrusting thread migration to the OS, they obtain up to a 66% performance improvement.…”
Section: Related Work
confidence: 99%
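To make the idea of on-line iterative tuning more concrete, the following is a minimal, self-contained sketch of a greedy weight-tuning loop. The synthetic throughput curve and the stubbed page rebalancing are placeholders standing in for real bandwidth measurements and move_pages(2)-style migration; this is not the authors' analytical model or implementation, only the general shape of such a tuner.

/* tune_weights.c - toy on-line tuning of an asymmetric interleave ratio.
 * Sketch only: the 2-node setup, the synthetic throughput model, and the
 * stubbed rebalancing are assumptions for illustration.
 * Build with: gcc -O2 tune_weights.c -o tune_weights
 */
#include <stdio.h>

#define NNODES 2

/* Placeholder for a real measurement (performance counters or timing):
 * a synthetic curve that peaks when 75% of pages sit on node 0. */
static double measure_throughput(const double w[NNODES])
{
    double x = w[0];
    return 100.0 - 400.0 * (x - 0.75) * (x - 0.75);
}

/* Placeholder for actual page migration (e.g., via move_pages(2)):
 * here we only report the target split. */
static void rebalance_pages(const double w[NNODES])
{
    printf("  rebalance to %.0f%% / %.0f%%\n", 100 * w[0], 100 * w[1]);
}

int main(void)
{
    double w[NNODES] = { 0.50, 0.50 };  /* start from even interleaving */
    const double step = 0.05;           /* share of pages shifted per round */
    double best = measure_throughput(w);

    /* Greedy hill-climb: shift pages toward node 0 while throughput improves. */
    for (int round = 0; round < 10 && w[1] - step >= 0.0; round++) {
        double trial[NNODES] = { w[0] + step, w[1] - step };
        rebalance_pages(trial);
        double got = measure_throughput(trial);
        if (got <= best) {              /* no gain: revert and stop tuning */
            rebalance_pages(w);
            break;
        }
        best = got;
        w[0] = trial[0];
        w[1] = trial[1];
    }

    printf("tuned split: %.0f%% on node 0, %.0f%% on node 1 (model throughput %.1f)\n",
           100 * w[0], 100 * w[1], best);
    return 0;
}

Running this toy loop moves the split from 50/50 toward the peak of the synthetic curve (75/25); against real bandwidth measurements, an online tuner would follow the same hill-climbing shape while only migrating pages, leaving thread placement to the OS as described in the quoted statement.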