2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA)
DOI: 10.1109/hpca.2015.7056034

Prediction-based superpage-friendly TLB designs

Abstract: This work demonstrates that a set of commercial and scale-out applications exhibit significant use of superpages and thus suffer from the fixed and small superpage TLB structures of some modern core designs. Other processors better cope with superpages at the expense of using power-hungry and slow fully-associative TLBs. We consider alternate designs that allow all pages to freely share a single, power-efficient and fast set-associative TLB. We propose a prediction-guided multi-grain TLB…
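To make the idea in the abstract concrete, the following C sketch models a single set-associative TLB shared by all page sizes, where a simple predictor picks which page size to probe first and falls back to the remaining sizes on a misprediction. The table dimensions, predictor indexing, and field names are illustrative assumptions, not the paper's actual microarchitecture, and only the lookup path is modeled.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of a prediction-guided multi-grain TLB lookup.
 * All sizes, field names, and the predictor scheme are assumptions for
 * exposition; they are not taken from the paper. */

#define TLB_SETS   64
#define TLB_WAYS    4
#define PRED_SIZE 256            /* per-PC page-size predictor entries */

enum page_shift { PG_4K = 12, PG_2M = 21, PG_1G = 30 };  /* log2 of page size */

struct tlb_entry {
    bool     valid;
    uint8_t  shift;              /* log2 of the page size this entry maps */
    uint64_t vpn;                /* virtual address >> shift */
    uint64_t pfn;                /* physical frame number at the same grain */
};

static struct tlb_entry tlb[TLB_SETS][TLB_WAYS];
static uint8_t predictor[PRED_SIZE];   /* last page-size shift seen per index */

/* Probe the set-associative TLB assuming a particular page size.
 * The set index depends on the assumed size, so each size maps the
 * same virtual address to a (potentially) different set. */
static bool probe(uint64_t vaddr, uint8_t shift, uint64_t *paddr)
{
    uint64_t vpn = vaddr >> shift;
    unsigned set = vpn % TLB_SETS;

    for (int w = 0; w < TLB_WAYS; w++) {
        struct tlb_entry *e = &tlb[set][w];
        if (e->valid && e->shift == shift && e->vpn == vpn) {
            *paddr = (e->pfn << shift) | (vaddr & ((1ULL << shift) - 1));
            return true;
        }
    }
    return false;
}

/* Lookup: try the predicted page size first, then fall back to the others. */
bool tlb_lookup(uint64_t pc, uint64_t vaddr, uint64_t *paddr)
{
    uint8_t predicted = predictor[pc % PRED_SIZE];
    uint8_t sizes[] = { PG_4K, PG_2M, PG_1G };

    if (predicted && probe(vaddr, predicted, paddr))
        return true;                       /* correct prediction: one probe */

    for (unsigned i = 0; i < sizeof sizes; i++) {
        if (sizes[i] == predicted)
            continue;
        if (probe(vaddr, sizes[i], paddr)) {
            predictor[pc % PRED_SIZE] = sizes[i];   /* train the predictor */
            return true;
        }
    }
    return false;                          /* TLB miss: walk the page table */
}
```

In such a scheme a correct prediction keeps the lookup to a single probe, while a misprediction costs additional probes, which is presumably why accurate page-size prediction matters for this class of design.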

Citations: Cited by 57 publications (52 citation statements)
References: 21 publications
“…Huge Pages [1,5] increase the TLB reach and reduce the performance overhead of page walks [13,15,36,41] by mapping a large fixed size region of memory with a single TLB entry [27,40,45,48]. For example, the x86-64 architecture allows a process to use 4 KB pages with 2 MB pages and 1 GB pages at the same time.…”
Section: Trends in TLBs (mentioning)
confidence: 99%
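The excerpt above notes that x86-64 lets a process mix 4 KB, 2 MB, and 1 GB pages. As a concrete, Linux-specific illustration of requesting a superpage mapping, the sketch below asks for a 2 MB region backed by the system's default huge page size via MAP_HUGETLB; it assumes huge pages have been reserved beforehand and is not taken from the cited papers.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Request a 2 MB region backed by huge pages. Requires huge pages to be
 * reserved first, e.g.:  echo 16 > /proc/sys/vm/nr_hugepages
 * A specific size can be selected with MAP_HUGE_2MB / MAP_HUGE_1GB from
 * <linux/mman.h> (Linux 3.8+); here the system default is used. */
#define HUGE_2M (2UL * 1024 * 1024)

int main(void)
{
    void *buf = mmap(NULL, HUGE_2M, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");  /* no huge pages reserved, or unsupported */
        return EXIT_FAILURE;
    }

    memset(buf, 0, HUGE_2M);          /* touch the region; one TLB entry covers it */
    printf("huge-page region mapped at %p\n", buf);

    munmap(buf, HUGE_2M);
    return EXIT_SUCCESS;
}
```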
“…However, these energy optimization techniques do not take into account hardware support for increasing the TLB reach (e.g., huge pages), that primarily targets improving performance and reducing static energy overheads due to TLB misses. Only the recent work on TLB Pred [41] considers huge pages for improving the dynamic energy efficiency in TLBs. The performance of TLB Pred depends on huge pages successfully reducing misses, but prior work shows that huge pages can still incur high performance overheads due to TLB misses [13,15,36].…”
Section: Introduction (mentioning)
confidence: 99%
“…Alternately, based on the observation that the majority of pages accessed per core on systems with superpage support are the smallest pages [38], we could avoid classifying superpages and consider them as coherent without significantly damaging system performance, while completely avoiding large pages overhead. Current systems supporting multiple page sizes implement multiple TLB structures to do so, thus the TLB actually storing large pages could classify them as shared without extra hardware support.…”
Section: Large Pages (mentioning)
confidence: 99%
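A minimal sketch of the idea quoted above, under the assumption of split small-page and large-page TLBs: the fill path of the structure holding large pages simply marks every superpage entry as shared, while small pages keep their normal private/shared classification. The structures and function names here are hypothetical, not from the cited work.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: superpages are conservatively treated as shared
 * (kept coherent), so no extra classification hardware or page-table
 * state is needed for them. */

enum class_state { PRIVATE_PAGE, SHARED_PAGE };

struct tlb_entry {
    uint64_t vpn, pfn;
    enum class_state cls;
    bool valid;
};

/* Stand-in for the real small-page classifier (e.g., page-table based). */
static enum class_state classify_small_page(uint64_t vpn)
{
    (void)vpn;
    return PRIVATE_PAGE;   /* placeholder decision */
}

void fill_small_page_tlb(struct tlb_entry *e, uint64_t vpn, uint64_t pfn)
{
    e->vpn = vpn;
    e->pfn = pfn;
    e->cls = classify_small_page(vpn);   /* normal classification path */
    e->valid = true;
}

void fill_large_page_tlb(struct tlb_entry *e, uint64_t vpn, uint64_t pfn)
{
    e->vpn = vpn;
    e->pfn = pfn;
    e->cls = SHARED_PAGE;   /* superpages are always classified as shared */
    e->valid = true;
}
```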
“…), which map to contiguous physical frames. Superpages improve TLB coverage, and can be stored in the same multi-grain TLB, or in a separate, single, full-associative TLB [58].…”
Section: TLB Overview (mentioning)
confidence: 99%
“…Alternately, based on the observation that the majority of pages accessed per core on systems with superpage support are the smallest pages [59], we could avoid classifying superpages and consider them as coherent without significantly damaging system performance, while completely avoiding large pages overhead. Current systems supporting multiple page sizes implement multiple TLB structures to do so, thus the TLB actually storing large pages could classify them as shared without extra hardware support.…”
Section: Large Page Sizes (mentioning)
confidence: 99%