The HDD, the storage device used primarily in Hadoop systems, suffers from slow transfer speeds and long latency due to its physical limits. The SSD is the main alternative to the traditional HDD, but its price remains high, so some distributed file systems use an SSD as a cache for the HDD. In this paper, for distributed file systems that use an SSD as a cache, we propose a mechanism that obtains the IDs of blocks scheduled to be used, or already in use, and loads those blocks into the SSD cache before the I/O requests for them arrive, thereby improving SSD cache performance during Hadoop MapReduce execution. To verify the reduction in MapReduce processing time and the improvement in read I/O performance under the proposed mechanism, we carried out experiments with a Hadoop benchmark tool. The experiments showed that Hadoop MapReduce performance improved by about 11% in the system with the proposed mechanism compared with a distributed file system using the existing SSD cache.

CCS Concepts: • Computer systems organization → Distributed architectures; • Software and its engineering → Software performance.
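The core idea of the abstract — preloading blocks a job is scheduled to read into the SSD cache before the read requests arrive — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `SSDCache` class, the `prefetch` and `read_block` helpers, and the LRU eviction policy are all assumptions introduced for clarity.

```python
from collections import OrderedDict

class SSDCache:
    """LRU cache standing in for the SSD tier in front of the HDD."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> block data

    def get(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)  # mark as recently used
            return self.blocks[block_id]
        return None

    def put(self, block_id, data):
        self.blocks[block_id] = data
        self.blocks.move_to_end(block_id)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used

def read_from_hdd(block_id):
    """Placeholder for a slow HDD read."""
    return f"data-for-{block_id}"

def prefetch(cache, scheduled_block_ids):
    """Load blocks a MapReduce job is scheduled to read into the SSD
    cache before the actual I/O requests for them are made."""
    for block_id in scheduled_block_ids:
        if cache.get(block_id) is None:
            cache.put(block_id, read_from_hdd(block_id))

def read_block(cache, block_id, stats):
    """Serve a read: SSD cache hit if prefetched, HDD miss otherwise."""
    data = cache.get(block_id)
    if data is not None:
        stats["hits"] += 1
        return data
    stats["misses"] += 1
    data = read_from_hdd(block_id)
    cache.put(block_id, data)
    return data
```

With a scheduler that announces `["blk_1", "blk_2", "blk_3"]` before the job runs, calling `prefetch` turns those three reads into SSD cache hits, while an unannounced `blk_4` still falls through to the HDD — the gap the proposed mechanism narrows.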