2018
DOI: 10.1109/tcad.2018.2857379
Bounding DRAM Interference in COTS Heterogeneous MPSoCs for Mixed Criticality Systems

Cited by 36 publications (32 citation statements)
References 24 publications
“…However, we believe it is likely that the hypothesis holds on other platforms as well, since several previous studies have highlighted that worst-case delays are generated when hardware request queues saturate [63,12], and maximizing concurrent activity of all cores increases the probability of such occurrence. Furthermore, in case the hypothesis does not hold, but a precise model of main memory is available, we argue that a more complex analysis, along the lines of [35,73], could be used to bound the maximum delay suffered by the kernel. Third, our approach requires modifications to the code of each GPU kernel.…”
Section: Discussion (mentioning)
confidence: 99%
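
The queue-saturation argument above can be made concrete with a small interference workload. The sketch below is hypothetical (buffer size, thread count, and line size are assumptions, not taken from the cited papers): each thread streams over a private buffer much larger than the last-level cache, so nearly every access misses and is issued to the DRAM controller, which is how maximizing concurrent activity of all cores drives the hardware request queues toward saturation.

#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>

#define NUM_THREADS 4                      /* assumed: one stressor per core */
#define BUF_BYTES   (64u * 1024u * 1024u)  /* assumed: much larger than LLC  */
#define LINE_BYTES  64                     /* typical cache-line size        */

/* Each thread walks its buffer touching one byte per cache line, so almost
 * every access misses in cache and queues a request at the DRAM controller;
 * running one such thread per core maximizes concurrent memory traffic
 * (a real setup would additionally pin each thread to its own core).       */
static void *stress(void *arg)
{
    (void)arg;
    volatile uint8_t *buf = malloc(BUF_BYTES);
    if (!buf)
        return NULL;
    for (int pass = 0; pass < 100; pass++)
        for (size_t i = 0; i < BUF_BYTES; i += LINE_BYTES)
            buf[i]++;
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, stress, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}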
“…Contention in shared memory among multiple processing elements can lead to a very high increase in memory access latency [44,63,35]. The Predictable Execution Model (PREM), first proposed in [53], introduces a method to prevent contention in the shared memory of multi-core platforms.…”
Section: Software Solutions (mentioning)
confidence: 99%
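
As background on PREM, here is a minimal sketch under assumed names (the co-scheduling machinery is only indicated in comments): each predictable interval is split into a memory phase that prefetches the interval's working set into the core-private cache and a compute phase that touches only prefetched data, so a system-level scheduler can allow at most one core into a memory phase at a time and thereby avoid shared-memory contention.

#include <stddef.h>
#include <stdint.h>

#define LINE_BYTES 64   /* assumed cache-line size */

/* Memory phase: pull every cache line of the working set into the private
 * cache.  __builtin_prefetch is a GCC/Clang builtin used here only to keep
 * the sketch short; other prefetch mechanisms work equally well.           */
static void memory_phase(const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; i += LINE_BYTES)
        __builtin_prefetch(&data[i], 0 /* read */, 3 /* keep in cache */);
}

/* Compute phase: operate only on the data prefetched above, so no
 * main-memory requests are issued while other cores may be active.         */
static uint64_t compute_phase(const uint8_t *data, size_t len)
{
    uint64_t acc = 0;
    for (size_t i = 0; i < len; i++)
        acc += data[i];
    return acc;
}

uint64_t prem_interval(const uint8_t *data, size_t len)
{
    /* In a full PREM system a scheduler would grant a "memory token" here,
     * ensuring no other core is in its memory phase at the same time.      */
    memory_phase(data, len);
    return compute_phase(data, len);
}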
“…The scope of this survey is restricted to timing verification techniques for multi-core and manycore platforms that specifically consider the impact of shared hardware resources. The following areas of related research are outside the scope of this survey: (i) works that introduce isolation mechanisms for multi-core systems, but rely on existing analysis for timing verification (examples include the work on the MERASA [94] and T-CREST projects [111], as well as work on shared cache management techniques (surveyed in [46]) and cache coherence [47,56]); (ii) works that introduce mechanisms and analyses of the worst-case latencies for a particular component, for example predictable DDR-DRAM memory controllers (comparative studies in [49,57]), but rely on these latencies being incorporated into existing analyses for timing verification; (iii) scheduling and schedulability analyses for multiprocessor systems that consider only a simple abstract model of task execution times (surveyed in [38]); (iv) multiprocessor software resource sharing protocols; (v) timing verification techniques for many-core systems with a Network-on-Chip (NoC) (surveyed in [59,70]) which consider only the scheduling of the NoC, or consider that tasks on each core execute out of local memory, with the only interaction with packet flows being through a consideration of release jitter; (vi) measurement-based and measurement-based probabilistic timing analysis methods; (vii) research that focuses on timing verification of single-core systems. Further, the survey does not cover specific research into multi-cores with GPGPUs or re-configurable hardware.…”
Section: Related Areas Of Research and Restrictions On Scope (mentioning)
confidence: 99%
“…These efforts follow two major directions. The first direction is to analyze existing memory controllers used in conventional high-performance systems to upper-bound the latency suffered by any request upon accessing DDRx main memory [3], [4], [22]. Following a similar direction, [23] aims to bound DRAM interference in conventional platforms by enforcing bank partitioning at the operating system level.…”
Section: Related Work (mentioning)
confidence: 99%
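
To illustrate what OS-level bank partitioning involves, here is a hedged sketch (the bank-bit positions and the per-core bank masks are assumptions; real memory controllers often XOR-hash physical-address bits, and a real implementation sits inside the kernel's page allocator): physical pages are "coloured" by the DRAM bank they map to, and each core is only given pages from its own bank set, so cores do not compete for the same banks' row buffers.

#include <stdbool.h>
#include <stdint.h>

#define BANK_SHIFT 13u   /* assumed position of the bank-index bits      */
#define BANK_BITS  3u    /* assumed 8 banks per rank                     */
#define BANK_COUNT (1u << BANK_BITS)

/* Map a physical address to a DRAM bank index (assumed linear mapping). */
static unsigned bank_index(uint64_t paddr)
{
    return (paddr >> BANK_SHIFT) & (BANK_COUNT - 1);
}

/* Illustrative static assignment: each of four cores owns two banks. */
static const uint8_t core_bank_mask[4] = { 0x03, 0x0C, 0x30, 0xC0 };

/* A page allocator would call this to decide whether a free physical page
 * may be handed to a task running on the given core, keeping every core's
 * memory traffic confined to its own banks.                               */
static bool page_allowed(uint64_t page_paddr, unsigned core)
{
    return core_bank_mask[core] & (1u << bank_index(page_paddr));
}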