Soramichi AKIYAMA, Member

SUMMARY: The latency and energy consumption of DRAM are serious concerns because (1) latency has not improved much for decades and (2) recent machines have very large main memory capacities. Device-level studies reduce both by shortening the wait times of DRAM internal operations so that they finish faster and consume less energy. Applying these techniques aggressively to achieve approximate memory is a promising direction for further reducing the overhead, given that many data-center applications today are to some extent robust to bit-flips. To advance research on approximate memory, its effect on applications must be evaluated so that both researchers and potential users can investigate how it affects realistic applications. However, hardware simulators are too slow to run workloads repeatedly with different parameters. To this end, we propose a lightweight method to evaluate the effect of approximate memory. The idea is to count the number of DRAM internal operations that occur on an application's approximate data and to calculate the probability of bit-flips from that count, instead of using heavyweight simulators. The evaluation shows that our system is three orders of magnitude faster than cycle-accurate simulators, and we also present case studies evaluating the effect of approximate memory on several realistic applications. key words: approximate memory, computer architecture, memory systems
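The counting-based idea in the abstract can be illustrated with a minimal sketch. The sketch assumes a simple independence model (not stated in the abstract): each DRAM internal operation flips a given bit with some fixed probability, so the chance that a bit has flipped after n operations is 1 - (1 - p)^n. The function names and the byte-level injection loop are hypothetical, for illustration only.

```python
import random

def flip_probability(per_op_flip_prob: float, num_ops: int) -> float:
    """Probability that a bit flips at least once after num_ops DRAM
    internal operations, assuming independent flips per operation."""
    return 1.0 - (1.0 - per_op_flip_prob) ** num_ops

def inject_bit_flips(data: bytearray, per_op_flip_prob: float,
                     num_ops: int, rng: random.Random) -> int:
    """Flip each bit of `data` with the modeled probability and
    return the number of flips, emulating approximate-memory error
    injection without a cycle-accurate simulator."""
    p = flip_probability(per_op_flip_prob, num_ops)
    flips = 0
    for i in range(len(data)):
        for bit in range(8):
            if rng.random() < p:
                data[i] ^= 1 << bit
                flips += 1
    return flips
```

Counting operations per application data region and then injecting flips at the derived probability is what makes this approach so much cheaper than full simulation: the expensive part reduces to bookkeeping plus one pass over the data.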
Live migration of virtual machines over a wide area network (WAN) has many use cases, such as cross-datacenter load balancing, low-carbon virtual private clouds, and disaster recovery of IT systems. An efficient wide-area live migration method is required because cross-datacenter connections have limited bandwidth. The page cache occupies a large portion of the memory of a Virtual Machine (VM) when it executes data-intensive workloads. We propose a new live migration technique, page cache teleportation, which reduces the total migration time of wide-area live migration with low overhead. It detects restorable page cache in the guest memory, that is, pages whose contents match the corresponding disk blocks. The restorable page cache is not transferred over the WAN but is instead restored from the disk image before the VM resumes. In this way, I/O performance degradation after the migration is reduced. Evaluations show that page cache teleportation reduces the total migration time of wide-area live migration and has a lower performance overhead than existing approaches.
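The detection step described above (finding guest memory pages whose contents match their backing disk blocks) can be sketched by comparing content hashes. This is a simplified illustration, not the paper's implementation: the function name, the dictionary-based page and block stores, and the assumption that a page's number directly identifies its backing disk block are all hypothetical.

```python
import hashlib

PAGE_SIZE = 4096  # common x86 page size, assumed here

def find_restorable_pages(guest_pages: dict[int, bytes],
                          disk_blocks: dict[int, bytes]) -> set[int]:
    """Return page numbers whose content matches the corresponding disk
    block; these pages can be restored from the disk image at the
    destination instead of being transferred over the WAN."""
    disk_hashes = {blk: hashlib.sha256(data).digest()
                   for blk, data in disk_blocks.items()}
    restorable = set()
    for pfn, content in guest_pages.items():
        # For illustration, the page number is assumed to equal the
        # number of its backing disk block; a real system would consult
        # the guest's page-cache-to-block mapping.
        if disk_hashes.get(pfn) == hashlib.sha256(content).digest():
            restorable.add(pfn)
    return restorable
```

Only the non-matching (dirty or anonymous) pages then need to cross the narrow cross-datacenter link, which is where the reduction in total migration time comes from.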