45th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 2012)
DOI: 10.1109/micro.2012.35
Addressing End-to-End Memory Access Latency in NoC-Based Multicores

Cited by 34 publications (9 citation statements) · References 28 publications
“…So, these metrics are not very accurate for ranking application criticality. Recently proposed network scheduling schemes in [18] aim to bring uniformity in network latencies and main memory bank utilization by applying separate rules to request and response packets. However, the techniques are application-oblivious and can lead to increased delays in the system while trying to equalize packet latencies.…”
Section: Related Work
confidence: 99%
“…Some of the standard and previously proposed anti-starvation strategies are discussed below. In [3], [4], [18], time-based (TB) batching is employed to enable fairness among application packets. In time-based batching, a packet is tagged with a batch id that is a function of the time when it is injected into the NoC.…”
Section: Fairness Issues in Packet Scheduling
confidence: 99%
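The time-based batching described above can be sketched as follows. This is a minimal illustration, not the cited papers' implementation; the names (`Packet`, `BATCH_INTERVAL`, `schedule`) and the batch-interval value are assumptions.

```python
# Hypothetical sketch of time-based (TB) batching: each packet is tagged
# with a batch id derived from its injection time, and older batches are
# served first so no application's packets starve.

BATCH_INTERVAL = 1000  # cycles per batch (illustrative parameter)

class Packet:
    def __init__(self, injection_cycle):
        self.injection_cycle = injection_cycle
        # Batch id is a function of the injection time.
        self.batch_id = injection_cycle // BATCH_INTERVAL

def schedule(packets):
    """Serve the oldest batch first (starvation-free), breaking ties
    by injection time within a batch."""
    return sorted(packets, key=lambda p: (p.batch_id, p.injection_cycle))

pkts = [Packet(2500), Packet(300), Packet(1200)]
print([p.injection_cycle for p in schedule(pkts)])  # [300, 1200, 2500]
```

A real router would enforce this per output port and cap in-flight batches, but the ordering rule is the core of the anti-starvation guarantee.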
“…Memory-centric NoC architectures with real chip implementations are studied in [35]. Recent work [36] tried to reduce end-to-end memory access latency in NoC-based multicore systems by prioritizing memory response packets and messages destined for idle memory banks. Memory placement mechanisms in NUMA systems such as page migration [37][38][39], interleaving [2] and replication [2,40] have been studied for decades.…”
Section: Optimization of Memory Access Traffic
confidence: 99%
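The idle-bank prioritization summarized in this citation can be sketched as a simple queue-selection rule. This is an illustrative sketch only, assuming a per-controller request queue; the names (`pick_next`, `idle_banks`) are hypothetical, not the paper's API.

```python
# Hypothetical sketch of bank-aware prioritization: among queued memory
# messages, prefer the oldest one destined for a currently idle bank,
# which keeps banks busy; otherwise fall back to plain FCFS.

def pick_next(requests, idle_banks):
    """Pop the oldest request targeting an idle bank, or the oldest
    request overall if no queued request can use an idle bank."""
    for i, req in enumerate(requests):
        if req["bank"] in idle_banks:
            return requests.pop(i)
    return requests.pop(0) if requests else None

q = [{"id": 1, "bank": 3}, {"id": 2, "bank": 7}, {"id": 3, "bank": 7}]
r = pick_next(q, idle_banks={7})
print(r["id"])  # 2
```

The FCFS fallback matters: without it, requests to busy banks could wait indefinitely behind a stream of idle-bank requests.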
“…Different from them, we focus on balancing the latencies amongst the on-chip memory accesses. In [11], Sharifi addressed balancing latencies of on-chip memory accesses by prioritizing memory response packets such that, in a given period of time, packets that experience higher latencies than the average packet latency of that application are expedited. However, they divided a whole memory access into two parts: an outward trip for the memory request (read or write) and a return trip for the memory response (read data or write acknowledgement), and treated them separately.…”
Section: Introduction, Motivation, and Related Work
confidence: 99%
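The prioritization rule paraphrased in this citation, expediting response packets that are already slower than their application's average, can be sketched as below. This is a minimal illustration of the stated rule only; the class and method names (`LatencyBalancer`, `record`, `expedite`) and the per-epoch bookkeeping are assumptions, not the scheme from [11].

```python
# Hypothetical sketch: track per-application average packet latency over
# an epoch, and flag a memory response packet for expedited (prioritized)
# service when its accumulated latency already exceeds that average.

from collections import defaultdict

class LatencyBalancer:
    def __init__(self):
        self.total = defaultdict(int)  # summed latencies per application
        self.count = defaultdict(int)  # packets observed per application

    def record(self, app, latency):
        """Account a completed packet's latency for application `app`."""
        self.total[app] += latency
        self.count[app] += 1

    def avg(self, app):
        return self.total[app] / self.count[app] if self.count[app] else 0.0

    def expedite(self, app, latency_so_far):
        """Prioritize a packet already slower than its app's average."""
        return latency_so_far > self.avg(app)

b = LatencyBalancer()
for lat in (80, 100, 120):
    b.record("app0", lat)
print(b.expedite("app0", 150), b.expedite("app0", 90))  # True False
```

Expediting the slow tail while leaving faster-than-average packets at normal priority is what pulls per-application latencies toward the mean.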