2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)
DOI: 10.1109/date.2010.5456951

Bounding the shared resource load for the performance analysis of multiprocessor systems

Abstract: Predicting timing behavior is key to reliable real-time system design and verification, but becomes increasingly difficult for current multiprocessor systems on chip. The integration of formerly separate functionality into a single multicore system introduces new inter-core timing dependencies, resulting from the common use of the now shared resources. In order to conservatively bound the delay due to the shared resource accesses, upper bounds on the potential amount of conflicting requests from other processor…

Cited by 51 publications (64 citation statements)
References 18 publications
“…In this context, some previous work considers the entire memory system as a single resource, such that a processor core requests this resource when it generates a cache miss and it must hold this resource exclusively until the data of the cache miss are delivered to the processor core that requested it [32,5,9,37,23]. They commonly assumed that each memory request takes a constant service time and memory requests from multiple cores are serviced in the order of their arrival time.…”
Section: Related Work
Mentioning, confidence: 99%
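As a rough illustration of the single-resource model described in this statement, the sketch below derives a delay bound from its two assumptions: every request occupies the memory system for a constant service time, and pending requests are served in arrival order. The function, its parameters, and the example numbers are hypothetical and not taken from any of the cited papers.

```python
# Minimal sketch of the "memory as a single shared resource" model:
# every request holds the resource for a constant service time, and
# pending requests are served in arrival (FCFS) order.

def worst_case_memory_delay(own_requests, conflicting_requests, service_time):
    """Upper bound on the time a core spends on shared-memory accesses.

    own_requests         -- number of cache misses the core issues itself
    conflicting_requests -- upper bound on requests from all other cores
                            that can interleave with them (this bound must
                            be supplied by the analysis)
    service_time         -- constant time to serve one request
    """
    # Worst case: every own request waits behind all conflicting requests
    # that arrived earlier, plus its own service time.
    return (own_requests + conflicting_requests) * service_time

# Hypothetical numbers: 100 own misses, at most 300 interfering requests,
# 50 ns per access -> about 20 microseconds of worst-case memory delay.
print(worst_case_memory_delay(100, 300, 50e-9))
```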
“…Previous studies on bounding memory interference delay [9,43,32,37,5] model main memory as a black-box system, where each memory request takes a constant service time and memory requests from different cores are serviced in either Round-Robin (RR) or First-Come First-Serve (FCFS) order. This memory model, however, is not safe for commercial-off-the-shelf (COTS) multi-core systems because it hides critical details necessary to place an upper…”
Section: Introduction
Mentioning, confidence: 99%
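The objection raised in this statement, that a black-box constant service time hides details of COTS memory systems, can be made concrete with a small example: the latency of a single DRAM access depends on whether it hits the currently open row or conflicts with it, so no single constant describes every request. The timing parameters below are illustrative DDR3-style values chosen for the sketch, not figures from the cited work.

```python
# Illustrative DRAM timing parameters in memory-clock cycles (assumed values).
T_RP  = 11   # precharge: close the currently open row
T_RCD = 11   # activate: open the requested row
CL    = 11   # column access: read the data out

def access_latency(row_hit: bool) -> int:
    """Latency of one DRAM read, in cycles, under a simple open-row model."""
    if row_hit:
        return CL                  # row already open: column access only
    return T_RP + T_RCD + CL       # close old row, open new one, then read

# A constant-service-time model calibrated to the row-hit case would
# understate the row-conflict case by a factor of about 3 here.
print(access_latency(row_hit=True), access_latency(row_hit=False))  # 11 33
```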
“…To provide timing guarantees, several researchers, such as Pellizzoni et al. [9], [10] and Schliecker et al. [13], [14], have recently proposed methodologies to analyze the worst-case delay a task suffers due to accesses to a shared bus and shared memory, assuming synchronous resource accesses. Specifically, in [9], [10], a framework is developed to analyze the maximum delay that a task may suffer due to peripheral interference.…”
Section: Introduction
Mentioning, confidence: 99%
“…This approach enables detailed system modelling, but is also prone to the problem of state-space explosion. Schliecker et al (2010) proposed a method that employs a general event-based model to estimate the maximum load on a shared resource. This approach makes few assumptions about the task model and is thus quite generally applicable; however, it only supports a single unspecified work-conserving bus arbiter.…”
Section: Related Work With a Focus on the Memory Bus
Mentioning, confidence: 99%
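The event-model-based load bound attributed here to Schliecker et al. (2010) can be illustrated with a small fixed-point computation: each interfering core j contributes at most η⁺_j(Δt) requests in any window of length Δt, and the busy window for a given number of own accesses must be large enough to also absorb that interference. The sketch below is a simplified illustration under a constant-service-time assumption; the function names, the arrival curve, and the numbers are hypothetical and not the exact formulation of the paper.

```python
# Rough sketch of an event-based shared-resource load bound: the busy
# window for n own accesses must also accommodate the maximum number of
# interfering accesses that other cores can issue within that window.

def delay_bound(n_own, eta_others, service_time, max_iter=1000):
    """Fixed point of w = (n_own + sum_j eta_j(w)) * service_time.

    n_own        -- number of accesses issued by the core under analysis
    eta_others   -- list of functions eta_j(dt) -> max requests of core j
                    in any window of length dt (upper event arrival curves)
    service_time -- constant service time per access
    """
    w = n_own * service_time          # start without any interference
    for _ in range(max_iter):
        load = n_own + sum(eta(w) for eta in eta_others)
        w_next = load * service_time
        if w_next == w:               # fixed point reached
            return w
        w = w_next
    return w                          # iteration budget exhausted; bound only

# Hypothetical interfering cores: each issues at most one request per 200 ns.
eta = lambda dt: int(dt // 200e-9) + 1
print(delay_bound(n_own=10, eta_others=[eta, eta], service_time=50e-9))
```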