Abstract: In embedded systems, SPM (scratchpad memory) is an attractive alternative to cache memory due to its lower energy consumption and higher predictability of program execution. This paper studies the problem of placing variables of a program into an SPM such that its WCET (worst-case execution time) is minimized. We propose an efficient dynamic approach that comprises two novel heuristics. The first heuristic iteratively selects a most beneficial variable as an SPM resident candidate based on its impact on the k…
“…The approaches proposed in [18,19] … Next, we propose a new approach for determining the maximum size of the SPM for each task based on our previous work on allocating variables of a single task to SPM [23]. For each variable Vi, we define a benefit vector benefit(Vi) as follows.…”
Section: 3
confidence: 99%
“…For more details on selecting a most beneficial variable and allocating it to SPM, we refer to [23].…”
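The cited heuristic repeatedly picks the variable whose placement in SPM yields the most benefit. As a rough illustration only (the actual algorithm of [23] uses a benefit vector and iterative re-evaluation, neither of which is reproduced in these snippets), a greedy selection by benefit density might look like:

```python
# Hypothetical sketch of benefit-driven SPM variable selection. The
# function name, the scalar `benefit` field, and the density ranking are
# illustrative assumptions, not the method of [23].
def select_spm_variables(variables, spm_size):
    """Greedily pick beneficial variables that fit in the SPM.

    `variables` maps a name to (size_in_bytes, benefit), where `benefit`
    stands in for the WCET reduction gained by placing the variable in SPM.
    """
    resident, free = [], spm_size
    # Rank candidates by benefit per byte (a common greedy heuristic).
    ranked = sorted(variables.items(),
                    key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    for name, (size, _benefit) in ranked:
        if size <= free:          # variable fits in the remaining SPM space
            resident.append(name)
            free -= size
    return resident
```

For example, with an 8-byte SPM and variables `a` (4 B, benefit 40), `b` (8 B, benefit 40), `c` (4 B, benefit 10), the sketch picks `a` first (highest density), skips `b` (no longer fits), and then admits `c`.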
Abstract-We propose a unified approach to the problem of scheduling a set of tasks with individual release times, deadlines and precedence constraints, and allocating the data of each task to the SPM (Scratchpad Memory) on a single-processor system. Our approach consists of a task scheduling algorithm and an SPM allocation algorithm. The former constructs a feasible schedule incrementally, aiming to minimize the number of preemptions in the feasible schedule. The latter allocates a portion of the SPM to each task in an efficient way by employing a novel data structure, namely, the preemption graph. We have evaluated our approach and a previous approach by using six task sets. The results show that our approach achieves up to 20.31% WCRT (Worst-Case Response Time) reduction over the previous approach.
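The abstract does not detail the preemption graph, but its stated purpose, allocating a portion of the SPM to each task, suggests the underlying idea that tasks which never preempt one another could reuse the same SPM region. A hypothetical sketch of that idea via greedy graph coloring (the pair set, function name, and coloring strategy are all assumptions, not the paper's algorithm):

```python
# Hypothetical sketch: tasks connected in the preemption graph may be live
# at the same time, so they get disjoint SPM regions; unconnected tasks
# can share a region. This is an illustration, not the cited algorithm.
def spm_regions(tasks, preemptions):
    """Assign an SPM region index to each task by greedy coloring.

    `preemptions` is a set of (task_a, task_b) pairs meaning one task may
    preempt the other, so their SPM contents must not overlap.
    """
    conflict = {t: set() for t in tasks}
    for a, b in preemptions:
        conflict[a].add(b)
        conflict[b].add(a)
    region = {}
    for t in tasks:
        used = {region[n] for n in conflict[t] if n in region}
        r = 0
        while r in used:          # smallest region index free of conflicts
            r += 1
        region[t] = r
    return region
```

With tasks A, B, C and a single preemption pair (A, B), A and B get distinct regions while C reuses A's region, so fewer SPM partitions are needed than one per task.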
“…In the past decade, many studies have exploited SPM in embedded systems to optimize energy consumption ([16], [18], [7], [8]) and performance ([13], [15], [20], [6], [19]).…”
Section: Related Work
confidence: 99%
“…Some studies have focused on worst-case execution time and on improving the timing predictability of applications by using SPM ([5], [15], [19]). Deverge et al. [5] dynamically allocated both static and automatic data to SPM in order to reduce a single task's WCET.…”
Section: Related Work
confidence: 99%
“…Suhendra et al. [15] proposed SPM allocation techniques for data memory to minimize a task's WCET. Wan et al. [19] studied the problem of placing variables of a program into an SPM to minimize WCET. Verma et al. [17] proposed the Cache Aware Scratchpad Allocation (CASA) algorithm, which uses the SPM to store instructions; they generated traces and then built a conflict graph to represent conflicting cache misses.…”
The memory subsystem is the performance bottleneck for data-intensive applications, which makes it a key consideration in high-performance embedded system optimization. On-chip SRAMs, including scratchpad memories (SPMs) and caches, are widely used in embedded systems to narrow the speed gap between CPU and memory. However, many existing SPM data allocation algorithms are designed for architectures with pure SPM as on-chip SRAM. As a result, for off-the-shelf embedded processors with hybrid on-chip scratchpad and caches, these algorithms may not lead to optimal overall performance due to the lack of consideration for cache behaviors. In this paper, we propose a comprehensive data allocation framework for the above-mentioned architectures. We formulate a cache-aware integer linear programming (ILP) model to identify memory objects to be allocated into SPM for average-case execution time improvement. The impact of SPM allocation on cache interference is captured to reduce the overall number of slow off-chip memory accesses. Experimental results show that such a hybrid on-chip SRAM organization outperforms the pure cache or SPM architecture with our data allocation mechanism. We evaluate execution cycles by selecting data-intensive benchmarks for different SPM-cache size combinations, where up to 25.4% total execution cycle reduction is achieved.
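The paper's model is a cache-aware ILP; if the cache-interference terms are set aside, the core allocation decision reduces to a 0/1 knapsack over memory objects under SPM capacity. A minimal sketch of that simplified selection (the function name, object list, and saved-cycle values are illustrative, not from the paper):

```python
# Simplified stand-in for the paper's cache-aware ILP: pick memory objects
# for SPM to maximize estimated cycle savings under a capacity limit.
# Cache-interference terms from the actual model are omitted here.
def allocate_objects(objects, capacity):
    """Exact 0/1 knapsack by state expansion.

    `objects`: list of (name, size_bytes, saved_cycles).
    Returns (total_saved_cycles, tuple_of_chosen_names).
    """
    best = {0: (0, ())}  # used capacity -> (savings, chosen objects)
    for name, size, saved in objects:
        # Snapshot items so each object is considered at most once.
        for used, (sav, chosen) in list(best.items()):
            nu = used + size
            if nu <= capacity:
                cand = (sav + saved, chosen + (name,))
                if nu not in best or best[nu][0] < cand[0]:
                    best[nu] = cand
    return max(best.values())
```

For instance, with an 8-byte SPM and objects `buf` (4 B, 100 cycles saved), `lut` (8 B, 120 cycles), `tmp` (4 B, 30 cycles), the exact answer is `buf` plus `tmp` (130 cycles), beating `lut` alone. The full ILP additionally penalizes allocations that worsen cache conflict behavior, which this sketch ignores.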