2020 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date48585.2020.9116265

Cache Persistence-Aware Memory Bus Contention Analysis for Multicore Systems

Cited by 12 publications (11 citation statements) · References 14 publications
“…Other approaches: In 2020, proposals were introduced that focus on evaluating contention on the memory bus. The method proposed by Rashid et al. [40] evaluates the impact of cache persistence on memory bus contention, considering that the higher the reuse of cache blocks, the lower the number of accesses to main memory and, consequently, the lower the contention on the memory bus. On the other hand, Restuccia et al. [41] focus on bounding the worst-case bus contention experienced by hardware accelerators deployed in the FPGA fabric, considering that the main source of unpredictability of hardware accelerators is access to memory through the bus.…”
Section: Proposal Focused on Achieving Better Performance
confidence: 99%
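The intuition quoted above lends itself to a small back-of-the-envelope calculation. The sketch below is not the analysis of Rashid et al. [40]; the task parameters, the request counting, and the per-request latency are hypothetical simplifications, used only to show how accounting for persistent cache blocks (blocks that survive in cache across successive jobs) shrinks the number of main-memory requests charged to interfering tasks and, with it, the bus contention bound.

```python
# Illustrative-only sketch of how cache persistence can tighten a bus
# contention bound. This is NOT the analysis of Rashid et al. [40]; the
# task set, the request counting, and the contention formula below are
# hypothetical simplifications for intuition only.

from math import ceil

L_MEM = 50  # assumed bus/memory latency per request (cycles)

def jobs_in_window(period, window):
    """Upper bound on jobs of a periodic task released in a time window."""
    return ceil(window / period) + 1

def contention_bound(interfering_tasks, window, persistence_aware):
    """Bound the bus delay a task under analysis can suffer in `window`.

    Each interfering task is a dict with:
      'period' : inter-arrival time
      'md'     : worst-case memory requests of one job (cold cache)
      'pcb'    : requests hitting persistent cache blocks, which jobs
                 after the first one do not need to re-fetch
    """
    total_requests = 0
    for t in interfering_tasks:
        n = jobs_in_window(t['period'], window)
        if persistence_aware:
            # first job pays all requests; later jobs skip persistent blocks
            total_requests += t['md'] + (n - 1) * (t['md'] - t['pcb'])
        else:
            total_requests += n * t['md']
    return total_requests * L_MEM

tasks = [
    {'period': 100, 'md': 8,  'pcb': 5},
    {'period': 200, 'md': 12, 'pcb': 9},
]
print(contention_bound(tasks, 400, persistence_aware=False))  # pessimistic
print(contention_bound(tasks, 400, persistence_aware=True))   # tighter
```

With these made-up figures, the persistence-aware bound charges each later job only for the blocks it must re-fetch, roughly halving the request count counted against the interfering tasks in this example.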
“…Table 6 shows a summary of the main approaches to reduce interference on the shared memory bus.

Ref.         | Category               | Approach                                   | Implementation | Target
[157]        | Bandwidth Regulator    | Memory bandwidth allocation                | Software       | HRT/SRT
[158]        | Bandwidth Regulator    | Prioritize soft real-time applications     | Software       | SRT
[32]         | Bandwidth Regulator    | Focuses on partitioned scheduling          | Software       | SRT
[160]        | Bandwidth Regulator    | Budget-based memory bandwidth regulation   | Software       | SRT
[159]        | Bandwidth Regulator    | Dynamic memory bandwidth allocation        | None           | HRT
[33], [161]  | Bandwidth Regulator    | Re-design of memory controllers            | Hardware       | HRT/SRT
[34]         | Bandwidth Regulator    | New memory controller architecture         | Hardware       | SRT
[35]         | Bandwidth Regulator    | Memory Inter-Arrival Time Traffic Shaping  | Hardware       | SRT
[36]         | Bandwidth Regulator    | Bandwidth Regulation Unit (BRU)            | Hardware       | SRT
[168]        | Offline Scheduling     | Execution model composed of rules          | Soft/Hard      | HRT
[167]        | Offline Scheduling     | Scheduling table                           | Software       | HRT
[37], [163]  | Phased Execution Model | Predictable Execution Model                | Software       | SRT
[165]        | Phased Execution Model | Acquisition Execution Restitution          | Software       | HRT
[166]        | Phased Execution Model | Execution profile and resource allocation  | Hard/Soft      | SRT
[170]        | Hardware Isolation     | Arbitration policy                         | Hardware       | SRT
[38], [171]  | Hardware Isolation     | TDMA arbitration policy                    | Hardware       | HRT
[173], [174] | Hardware Isolation     | Time Division Multiplexing (TDM)           | Hardware       | SRT
[39]         | Hardware Isolation     | Multi-TDMA model                           | Hardware       | HRT
[31]         | Bandwidth Regulator    | Memory bandwidth reservation mechanism     | Software       | AVG
[40]         | Other Approaches       | Persistence-aware bus contention analysis  | None           | AVG
[41]         | Other Approaches       | Bounding the worst-case bus contention     | Hardware       | AVG…”
Section: Summary
confidence: 99%
“…Tasks are partitioned to cores at design-time and cannot migrate to any other core at run-time. Similarly to existing works [7,10,12,13,14,17,16], we assume a single-channel shared memory bus that connects all the cores to the main memory and the memory bus can only handle one memory phase at a time, i.e., only one task can access the main memory at a time. A memory phase cannot be preempted once it accesses the memory bus to perform memory transactions.…”
Section: System Model
confidence: 99%
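To make the quoted bus model concrete, the toy simulation below assumes a single-channel bus that serves one memory phase at a time under first-come-first-served arbitration (just one possible policy; the cited works also cover other arbiters), and that a granted phase runs to completion. The phase names, release times, and lengths are invented for illustration and are not taken from any of the cited papers.

```python
# Toy simulation of the quoted bus assumption: a single-channel memory bus
# serves one memory phase at a time, and a phase cannot be preempted once
# it starts. Purely illustrative; FCFS arbitration and all numbers are
# assumptions, not the model of the cited works.

import heapq

def simulate_bus(phases):
    """phases: list of (release_time, duration, name), FCFS-arbitrated.

    Returns (name, start, finish) per phase; at most one phase holds the
    bus at any instant, and a started phase always runs to completion.
    """
    pending = sorted(phases)   # order by release time
    schedule = []
    bus_free_at = 0
    queue = []                 # released phases waiting for the bus
    i = 0
    while i < len(pending) or queue:
        # admit every phase released by the time the bus becomes free
        while i < len(pending) and pending[i][0] <= bus_free_at:
            heapq.heappush(queue, pending[i])
            i += 1
        if not queue:          # bus idles until the next release
            bus_free_at = pending[i][0]
            continue
        rel, dur, name = heapq.heappop(queue)
        start = max(bus_free_at, rel)
        finish = start + dur   # non-preemptive: runs to completion
        schedule.append((name, start, finish))
        bus_free_at = finish
    return schedule

phases = [(0, 30, 'core0.A'), (5, 20, 'core1.B'), (10, 10, 'core2.C')]
for name, s, f in simulate_bus(phases):
    print(f'{name}: bus held [{s}, {f})')
```

Running the example shows core1.B and core2.C stalling behind core0.A even though they are released while A still holds the bus; this serialisation of memory phases is precisely the contention that the quoted analyses set out to bound.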
“…A general framework for memory bus contention analysis that covers a wide range of bus arbitration policies is proposed in [14]. Rashid et al. [16] proposed a cache persistence-aware memory bus contention analysis for multicore systems. Although these approaches bound the bus contention under partitioned fixed-priority scheduling, they target generic task models and are not tailored to phased execution models.…”
Section: Related Work
confidence: 99%