2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA) 2017
DOI: 10.1109/etfa.2017.8247615
Memory interference characterization between CPU cores and integrated GPUs in mixed-criticality platforms

Abstract: The terms and conditions for the reuse of this version of the manuscript are specified in the publishing policy. For all terms of use and more information see the publisher's website.

Cited by 51 publications (49 citation statements)
References 13 publications
“…Pan et al [26] designed an LLC management strategy for better performance. Cavicchioli et al [4] studied different SoCs and fused CPU-GPU devices to characterize memory contention. Hill et al [10] extended the Roofline model for mobile SoCs to address memory contention from the perspective of PU bandwidth (BW) usage.…”
Section: Related Work, 8.1 Memory Contention Studies
confidence: 99%
“…Collocated kernel execution on an iSMHS will likely cause contention on the shared memory bus, and the resulting interference could negatively affect perceived bandwidth (BW) on collocated kernels. Several studies [4,7,15,37] focused on identifying the memory access patterns of collocated kernels in CPU+GPU iSMHS and suggested smart scheduling mechanisms to minimize contention effects, mostly via ad hoc approaches. However, these approaches do not provide a systematic solution for systems with different heterogeneity characteristics and an arbitrary number of PUs.…”
Section: Introduction
confidence: 99%
“…When both these clients access memory, contention at the level of the memory controller and within the DRAM banks can cause significant performance degradation to the observed applications, as concurrent access to memory devices is most commonly arbitrated with no real-time-compliant mechanisms. In [9] a complete evaluation of the extent of memory contention is presented: Cavicchioli et al show that a GPU application can experience performance degradation of up to 100% under intensive memory use by a CPU application and, conversely, an interfering memory-bound GPU activity can increase the latencies of a CPU application by almost 6x. Tests were conducted on multiple commercial SoCs that feature an integrated GPU, such as NVIDIA development boards (TX1 and TK1) and an Intel i7 processor.…”
Section: A. WCET Estimation
confidence: 99%
“…Changes in the GPU-RLM communication infrastructure mean implementing signals at each scheduling event related to an application: new work has been submitted, previous work has been consumed by the GPU engines, and the event of server budget expiration (A, B and C respectively in figure 2). We are also investigating methodologies to mitigate memory interference between CPU and GPU in embedded SoCs [1,2,4,5].…”
Section: Future Work on Virtualization
confidence: 99%