2013 International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS)
DOI: 10.1109/samos.2013.6621127
TimeCube: A manycore embedded processor with interference-agnostic progress tracking

Abstract: Recently introduced processors such as Tilera's Tile Gx100 and Intel's 48-core SCC have delivered on the promise of high performance per watt in manycore processors, making these architectures ostensibly as attractive for low-power embedded processors as for cloud services. However, these architectures space-multiplex the microarchitectural resources between many threads to increase utilization, which leads to potentially large and varying levels of interference. This decorrelates CPU-time from actual…

Cited by 7 publications (6 citation statements)
References 41 publications (41 reference statements)
“…Further, these techniques are either dependent on the replacement policy [36,37] or require modifying the cache tag arrays [33,35]. Similarly, cache partitioning techniques [30,29,10,31] incur significant overhead due to larger associative (up to 128/256-way) tag structures, or require modifications to the replacement policy to adapt to their needs [30,31]. From these discussions, we see that a simple, efficient, and scalable cache monitoring mechanism is required.…”
Section: Motivation
See 1 more Smart Citation
“…Further, these techniques are either dependent on the replacement policy [36,37] or require modifying the cache tag arrays [33,35]. Similarly, cache partitioning techniques [30,29,10,31] incur significant overhead due to larger associative (up to 128/256-way) tag structures, or require modification to the replacement policies to adapt to their needs [31] [30]. From these discussions, we see that a simple, efficient and scalable cache monitoring mechanism is required.…”
Section: Motivationmentioning
confidence: 99%
“…Cache partitioning techniques [7,8,6,10] focus on allocating a fixed number of ways per set to competing applications. Typically, a shadow tag structure [6], which exploits the stack property of LRU [26], monitors an application's cache utility by using counters to record the number of hits each recency position in the LRU stack receives.…”
Section: Cache Partitioning Techniques
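The shadow-tag monitoring described above can be illustrated with a minimal sketch. This is an assumed software model of the idea (not the cited papers' hardware design): tags for a sampled set are kept in LRU order, a hit at recency position i increments counter i, and by the stack property the hits an application would achieve with w ways is the sum of counters 0..w-1.

```python
class ShadowTagMonitor:
    """Utility monitor for one sampled cache set (illustrative sketch)."""

    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.stack = []                     # index 0 = MRU, last = LRU
        self.hit_counters = [0] * num_ways  # hits per LRU recency position

    def access(self, tag):
        if tag in self.stack:
            pos = self.stack.index(tag)
            self.hit_counters[pos] += 1     # would hit with >= pos+1 ways
            self.stack.pop(pos)
        elif len(self.stack) == self.num_ways:
            self.stack.pop()                # evict the LRU tag
        self.stack.insert(0, tag)           # promote to MRU

    def hits_with_ways(self, w):
        # Stack property of LRU: a hit at recency position i is a hit
        # in any cache with more than i ways.
        return sum(self.hit_counters[:w])


mon = ShadowTagMonitor(num_ways=4)
for tag in ["A", "B", "A", "C", "A"]:
    mon.access(tag)
# With 1 way the reuses of A would miss; with 2+ ways they both hit.
```

Sweeping `hits_with_ways(w)` over w yields the per-application utility curve that partitioning schemes use to pick way allocations.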
“…TimeCube [26] tracks application progress using an analytical performance estimation model similar to the one proposed by Solihin et al. [27]. These models can capture fine-grained architectural details, such as prefetch tracking, memory bandwidth constraints, and cache intricacies such as dirty lines, via mechanisms like those proposed by Kaseridis et al. [28].…”
Section: Related Work
“…DR-SNUCA has physically separated cache-arrays connected through a point-to-point pipelined on-chip memory network, as shown in Figure 2, which are dynamically allocated to applications. We use the cache allocation algorithm presented in TimeCube [4]. When multiple cache-arrays are allocated to an application, they are merged by increasing the number of cache sets allocated to the application while keeping the number of associative ways constant.…”
Section: Number of 128KB Cache-Arrays
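The merging rule quoted above (more sets, constant associativity) can be sketched as an indexing function. This is a hypothetical model, not DR-SNUCA's actual circuit: with k merged cache-arrays (k a power of two), the set space grows k-fold, so the extra set-index bits select which physical array holds the line.

```python
def locate(addr, line_bytes, sets_per_array, num_arrays):
    """Map an address to (array_id, local_set) across merged cache-arrays.

    Assumes line_bytes, sets_per_array, and num_arrays are powers of two.
    Associativity is unchanged: only the number of sets scales with
    num_arrays, exactly as in the merging scheme described above.
    """
    block = addr // line_bytes               # strip the line-offset bits
    total_sets = sets_per_array * num_arrays
    global_set = block % total_sets          # set index across merged arrays
    array_id = global_set // sets_per_array  # which physical cache-array
    local_set = global_set % sets_per_array  # set within that array
    return array_id, local_set


# Example: 64B lines, two 128-set arrays merged for one application.
print(locate(64 * 130, 64, 128, 2))  # block 130 of 256 sets -> array 1, set 2
```

Doubling `num_arrays` simply consumes one more address bit for array selection, which is why arrays can be granted or revoked without changing the way count.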
“…Moreover, it provides no intuition about the benchmarks that we have not included in our evaluation. To limit the evaluation space and give the evaluation structure, we classify our benchmarks by memory characteristics into a three-type taxonomy [4], and then examine runs that include different ratios of the three types. The taxonomy is as follows: an application that sees no drop in miss rate with increasing cache size is a stream application, an application whose miss rate drops suddenly at some cache size is a cliff application, and an application whose miss rate drops gradually with increasing cache size is a slope application.…”
Section: DR-SNUCA Evaluation
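The stream/cliff/slope taxonomy above can be sketched as a classifier over a measured miss-rate-versus-cache-size curve. The function name and thresholds here are assumptions for illustration, not values from the paper: "stream" means no meaningful improvement, "cliff" means one step captures most of the improvement, and "slope" means the improvement is spread out.

```python
def classify(miss_rates, flat_eps=0.01, cliff_frac=0.8):
    """Label a workload from miss rates sampled at increasing cache sizes.

    flat_eps and cliff_frac are illustrative thresholds (assumptions).
    """
    drops = [a - b for a, b in zip(miss_rates, miss_rates[1:])]
    total_drop = miss_rates[0] - miss_rates[-1]
    if total_drop < flat_eps:
        return "stream"   # more cache never helps
    if max(drops) / total_drop >= cliff_frac:
        return "cliff"    # a single size step yields most of the benefit
    return "slope"        # miss rate falls gradually with size


print(classify([0.50, 0.50, 0.50]))        # stream
print(classify([0.50, 0.50, 0.10, 0.10]))  # cliff
print(classify([0.50, 0.40, 0.30, 0.20]))  # slope
```

Under a partitioning scheme, stream applications are cheap to cap at minimal cache, cliff applications need their working-set size and little more, and slope applications benefit from whatever capacity remains.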