In this paper, we address power-aware scheduling of periodic tasks to reduce CPU energy consumption in hard real-time systems through dynamic voltage scaling. Our intertask voltage scheduling solution includes three components: (a) a static (off-line) solution to compute the optimal speed, assuming worst-case workload for each arrival; (b) an on-line speed reduction mechanism to reclaim energy by adapting to the actual workload; and (c) an on-line, adaptive and speculative speed adjustment mechanism to anticipate early completions of future executions by using average-case workload information. All these solutions still guarantee that every deadline is met. Our simulation results show that our reclaiming algorithm alone outperforms other recently proposed intertask voltage scheduling schemes. Our speculative techniques are shown to provide additional gains, coming within 10% of the theoretical lower bound.
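The static and reclaiming components can be illustrated with a minimal sketch (hypothetical task set and function names, not the paper's exact algorithm): under EDF, running every job at a speed equal to the total worst-case utilization keeps the schedule feasible, and a job that inherits slack from earlier-than-worst-case completions can stretch its remaining work over the extra time at a lower speed.

```python
# Sketch of intertask DVS under EDF (illustrative only; speeds are
# normalized so that 1.0 is the maximum processor speed).

def static_speed(tasks):
    """Static optimal speed under EDF: the total worst-case utilization.
    tasks: list of (wcet_at_full_speed, period)."""
    return sum(c / p for c, p in tasks)

def reclaimed_speed(static_s, remaining_wcet, slack):
    """Greedy reclaiming: stretch the remaining worst-case work of the
    current job over its own budget plus the slack freed by early
    completions (slack expressed in full-speed work units)."""
    return static_s * remaining_wcet / (remaining_wcet + slack)

tasks = [(1.0, 4.0), (2.0, 8.0)]        # utilization = 0.25 + 0.25 = 0.5
s = static_speed(tasks)                 # 0.5: half speed suffices statically
s_low = reclaimed_speed(s, 1.0, 1.0)    # 0.25: one unit of slack halves it again
```

The speculative component of the paper goes one step further and lowers the speed below the reclaimed value based on average-case workload, provisioning for a later speed-up if the worst case materializes.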
Real-time systems are increasingly used in applications that are time-critical in nature. Fault-tolerance is an important requirement of such systems, due to the catastrophic consequences of not tolerating faults. In this paper, we study a scheme that provides fault-tolerance through scheduling in real-time multiprocessor systems. We schedule multiple copies of dynamic, aperiodic, nonpreemptive tasks in the system, and use two techniques that we call deallocation and overloading to achieve a high acceptance ratio (the percentage of arriving tasks scheduled by the system). This paper compares the performance of our scheme with that of other fault-tolerant scheduling schemes, and quantifies how much each of deallocation and overloading contributes to the acceptance ratio. The paper also provides a technique that can help real-time system designers determine the number of processors required to provide fault-tolerance in dynamic systems. Lastly, a formal model is developed for the analysis of systems with uniform tasks.
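The two techniques named above can be sketched in a few lines (hypothetical data model and function names; a sketch of the general primary/backup idea, not the paper's exact scheme): overloading lets two backup copies share a reservation when a single fault cannot trigger both, and deallocation releases a backup's reservation once its primary completes fault-free.

```python
# Illustrative primary/backup bookkeeping for fault-tolerant scheduling.

def can_overload(backup_a, backup_b):
    """Two backups may share one reservation on a processor only if their
    primaries run on *different* processors: under a single-fault
    assumption, at most one of the two backups will ever execute."""
    return backup_a["primary_proc"] != backup_b["primary_proc"]

def deallocate(reservations, task_id):
    """When a primary finishes without a fault, release its backup's
    reservation so the freed time can admit newly arriving tasks,
    raising the acceptance ratio."""
    reservations.pop(task_id, None)

b1 = {"primary_proc": 0}
b2 = {"primary_proc": 1}
can_overload(b1, b2)   # True: primaries on different processors
can_overload(b1, {"primary_proc": 0})   # False: one fault could need both
```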
In this paper, we provide an efficient solution for periodic real-time tasks with (potentially) different power consumption characteristics. We show that a task T_i can run at a constant speed S_i at every instance without hurting optimality. We sketch an O(n^2 log n) algorithm to compute the optimal S_i values. We also prove that the EDF (Earliest Deadline First) scheduling policy can be used to obtain a feasible schedule with these optimal speed values.
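The feasibility claim can be illustrated with a minimal sketch (hypothetical task set; this shows only the standard EDF utilization test under per-task constant speeds, not the O(n^2 log n) speed-assignment algorithm itself): scaling task T_i down to speed S_i inflates its demand to C_i / S_i, and EDF remains feasible as long as the effective utilization stays at or below 1.

```python
def edf_feasible(tasks):
    """EDF feasibility for periodic tasks run at per-task constant speeds:
    feasible iff sum(C_i / (S_i * T_i)) <= 1.
    tasks: list of (wcet_at_full_speed, period, assigned_speed)."""
    return sum(c / (s * p) for c, p, s in tasks) <= 1.0

# Hypothetical task set (wcet at full speed, period, assigned speed):
edf_feasible([(1.0, 4.0, 0.5), (2.0, 8.0, 0.5)])   # True: effective utilization 1.0
edf_feasible([(1.0, 4.0, 0.4), (2.0, 8.0, 0.5)])   # False: 1.125 exceeds 1
```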
Phase-Change Memory (PCM) has recently been introduced as a candidate main memory technology [6]. However, PCM poses challenges that have to be addressed before it can be used as a main memory replacement. Specifically, PCM suffers from limited endurance (i.e., it wears out due to write operations) and expensive write operations (i.e., high latency and energy). Indeed, too many writes to a PCM main memory will lead to a short device lifetime, poor performance and high energy consumption.

Other memory technologies also suffer from limited endurance and expensive writes; Flash [7], [8] is the most common example. Much attention has been given to improving Flash memory lifetime and performance [7], [9]. As an example, Mylavarapu et al. introduce algorithms that avoid erase operations and apply wear leveling (WL) to enhance lifetime and performance [7]. Because the problems associated with Flash endurance/performance [8], [9] are different from those for PCM, Flash WL algorithms are of limited use in a PCM main memory. Flash WL algorithms [7] avoid erasing a page on every write by allocating a new clean physical page. This allocation is unnecessary for PCM due to its bit-addressability. PCM also has a larger overall endurance (i.e., it can sustain 10^7 writes rather than the 10^4 to 10^6 writes for Flash) and does not require predefined blocking.

The use of PCM in main memory has recently been proposed with techniques applied to increase PCM lifetime. The PCM storage device presented in [10] implements a read-before-write (RW) loop at the bit level to improve reliability and extend lifetime. The work in [5] uses read-before-write, row-level rotation (RL) and segment swapping (SS) as endurance enhancements at the device level. RL equalizes wear at the row level by rotating cache lines. SS is done by swapping two segments: the one currently being written and the one that is least-frequently-written (LFW).
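The read-before-write idea can be sketched as follows (an illustrative bit-level model with hypothetical function names, not a device specification): before programming a word, the controller reads the stored value, XORs it with the incoming value, and flips only the differing bits, so unchanged cells accrue no wear.

```python
def read_before_write(current, new):
    """Bit-level read-before-write: compare the stored word with the
    incoming word and return only the bit positions that must be
    programmed, sparing every unchanged cell a write."""
    diff = current ^ new
    return [i for i in range(diff.bit_length()) if (diff >> i) & 1]

read_before_write(0b1010, 0b1110)   # [2]: only bit 2 differs
read_before_write(0b1010, 0b1010)   # []: a redundant write costs nothing
```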
However, the large segment size (1 MByte) used in SS [5] degrades lifetime compared to a small segment size, because the distribution of writes within a large segment can be skewed. Nevertheless, large segments are used in [5] to reduce the costs associated with searching for the LFW segment during a swap.

A system-level approach is used in [3] to incorporate PCM in the memory hierarchy. This work proposes a hybrid memory, where a large PCM memory is augmented with a small DRAM that acts as a "page cache" for the PCM memory. The page cache helps performance by buffering frequently needed pages. It also helps endurance by reducing the number of writes to PCM through write combining and coalescing. Although the page cache filters writes to PCM, it does not fully mitigate the endurance problem. Additional techniques are applied at the cache line and block levels. At the cache line level, only the lines modified in a page are written to PCM. To avoid unbalanced damage from writes, cache lines are rotated within a page. Finally, swapping is used at the block level for wear leveling.

In this paper, we propose three new approaches to address the endurance problem when PCM is used...
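The segment-swapping policy discussed above can be sketched with minimal bookkeeping (hypothetical names; a sketch of the general LFW idea, not the implementation in [5]): per-segment write counters are kept, and when the segment being written crosses a wear threshold it is swapped with the least-frequently-written segment. The linear scan below is exactly the LFW search cost that motivates the large segments in [5].

```python
def swap_target(writing_seg, write_counts):
    """Pick the swap partner for a heavily written segment: the
    least-frequently-written (LFW) segment, found by scanning the
    per-segment write counters (O(n) in the number of segments)."""
    return min((s for s in write_counts if s != writing_seg),
               key=write_counts.get)

counts = {"seg0": 900, "seg1": 15, "seg2": 120}
swap_target("seg0", counts)   # "seg1": the coldest segment absorbs the hot data
```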