Abstract: Cloud computing provides access to shared resources over the Internet, offering benefits such as broad network access, scalability, and cost savings. However, cloud data centers consume a significant amount of energy because of inefficient resource allocation. In this paper, a novel virtual machine consolidation technique based on energy and temperature is presented in order to improve Quality of Service (QoS). Two heuristic and meta-heuristic algorithms are proposed, called HET-VC (Heurist…
“…Proactive smoothing reduced unnecessary migrations by approximately 16 to 49% with this approach. Yavari et al. [6] have focused on energy optimization to maximize resource management. Two heuristic algorithms, HET-VC and FET-VC, were proposed.…”
Many problems in cloud computing are not solvable in polynomial time, and the only remaining option is to choose an approximate solution instead of the optimum. Virtual machine placement is one such problem with resource constraints, in which the overall objective is to optimize multiple host resources during the placement process. In this paper we address this problem with large NP-hard instances and propose a novel local search-based approximation algorithm. To the best of our knowledge, this problem has not yet been studied with NP-hard instances of this size. The proposed algorithm is empirically evaluated against state-of-the-art techniques; it improves placement results by 18% in CPU utilization, 21% in resource contention, and 26% in overall resource utilization for benchmark instances collected from an Azure private cloud data center.
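The abstract above describes a local search-based approximation for multi-resource VM placement. The following is a minimal sketch of that general idea, not the paper's actual algorithm: a first-fit initial placement, followed by random single-VM relocation moves that are accepted only when they remain feasible and reduce a contention proxy (variance of CPU utilization across hosts). All data structures and the cost function are illustrative assumptions.

```python
import random

def first_fit(vms, hosts):
    """Initial placement: assign each VM (cpu, mem) to the first host with room."""
    load = [[0, 0] for _ in hosts]
    place = [None] * len(vms)
    for i, (c, m) in enumerate(vms):
        for h, (hc, hm) in enumerate(hosts):
            if load[h][0] + c <= hc and load[h][1] + m <= hm:
                load[h][0] += c
                load[h][1] += m
                place[i] = h
                break
    return place

def cost(place, vms, hosts):
    """Cost = variance of per-host CPU utilization (a simple contention proxy)."""
    util = [0.0] * len(hosts)
    for i, h in enumerate(place):
        util[h] += vms[i][0] / hosts[h][0]
    mean = sum(util) / len(util)
    return sum((u - mean) ** 2 for u in util)

def feasible(place, vms, hosts):
    """Check that no host exceeds its CPU or memory capacity."""
    load = [[0, 0] for _ in hosts]
    for i, h in enumerate(place):
        load[h][0] += vms[i][0]
        load[h][1] += vms[i][1]
    return all(l[0] <= hosts[h][0] and l[1] <= hosts[h][1]
               for h, l in enumerate(load))

def local_search(vms, hosts, iters=2000, seed=0):
    """Hill climbing: try random relocations, keep only improving feasible moves."""
    rng = random.Random(seed)
    place = first_fit(vms, hosts)
    best = cost(place, vms, hosts)
    for _ in range(iters):
        i, h = rng.randrange(len(vms)), rng.randrange(len(hosts))
        old = place[i]
        if h == old:
            continue
        place[i] = h
        if feasible(place, vms, hosts) and cost(place, vms, hosts) < best:
            best = cost(place, vms, hosts)
        else:
            place[i] = old  # revert infeasible or non-improving move
    return place, best
```

Since only improving moves are kept, the final cost is never worse than the first-fit starting point; real placement algorithms add richer neighborhoods (swaps, multi-VM moves) and multi-resource objectives.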
“…These other issues are as important as the two major issues in replication [145]. Hence, four important issues of any data replication strategy are identified: (1) what data should be replicated, (2) where to place a new replica, (3) when a replica should be created or deleted, and (4) how many replicas to create [52].…”
Section: Challenges Of Dynamic Replication Strategies In Clouds
“…It is recognized as web-based administration of configurable, parallel, and adaptive systems and has advanced as the most recent approach for accessing, managing, and controlling massive, distributed data across various geographical areas. The main purpose of cloud computing is to provide simplified and proficient on-demand network access, along with service to a pool of shared virtualized processing assets, on a pay-as-you-go basis [1][2][3][4]. Besides providing data availability, it additionally improves load balancing, fault tolerance, and scalability.…”
Data replication copies the same data to multiple locations to achieve zero loss of information in case of failures, without any downtime. Dynamic data replication strategies (which decide replica locations at run time) in clouds should optimize key performance indicator parameters such as response time, reliability, availability, scalability, cost, and performance. To fulfill these objectives, various state-of-the-art dynamic data replication strategies have been proposed, based on several criteria, and reported in the literature along with their advantages and disadvantages. This paper provides a quantitative analysis and performance evaluation of target-oriented replication strategies based on their target objectives. We identify which target objectives are most addressed, which are moderately addressed, and which are least addressed in target-oriented replication strategies. The paper also includes a detailed discussion of challenges, issues, and future research directions. This comprehensive analysis and performance evaluation opens a new door for researchers in the field of cloud computing and will be helpful for further development of cloud-based dynamic data replication strategies, toward a technique that addresses all target objectives effectively in one replication strategy.
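The four classic replication decisions quoted earlier (what, when, where, how many) can be sketched as a single planning function. This is a hypothetical popularity-driven policy for illustration only; the thresholds, node-load model, and replica-count formula are assumptions, not taken from any surveyed strategy.

```python
import math

def replicas_needed(access_count, total_accesses, max_replicas=5):
    """How many: scale the replica count with the file's share of all accesses."""
    if total_accesses == 0:
        return 1
    share = access_count / total_accesses
    return max(1, min(max_replicas, math.ceil(share * max_replicas)))

def plan_replication(access_counts, node_loads, current_replicas,
                     hot_threshold=100):
    """Decide per file: replicate now (when)? how many copies? on which nodes (where)?"""
    plan = {}
    total = sum(access_counts.values())
    for f, hits in access_counts.items():
        if hits < hot_threshold:          # when: only replicate hot files
            continue
        target = replicas_needed(hits, total)
        have = len(current_replicas.get(f, []))
        if target <= have:                # already enough copies
            continue
        # where: least-loaded nodes that don't already hold a copy
        candidates = sorted(
            (n for n in node_loads if n not in current_replicas.get(f, [])),
            key=lambda n: node_loads[n],
        )
        plan[f] = candidates[: target - have]
    return plan
```

Real strategies differ mainly in how each of the four questions is answered (access history models, bandwidth-aware placement, replica decay), but this decision skeleton is common to most of them.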
“…Yavari et al. [28] have presented a consolidation-based technique that considers both the temperature and the energy of servers. The authors are concerned with minimizing the amount of heat emitted by servers and improving their utilization.…”
Section: B. Power-Efficiency Based On Consolidation
The growing demand for cloud computing services, driven by accelerating digital transformation and the high elasticity of the cloud, requires more effort to improve the electrical energy efficiency of cloud data centers. In this paper, an energy-efficient hybrid (EEH) framework for improving the efficiency of electrical energy consumption in data centers is proposed and evaluated. The proposed framework is based on both request scheduling and server consolidation, rather than depending on only one approach as in existing related works. The EEH framework sorts the customers' requests (tasks) according to their time and power needs before performing the scheduling. It has a scheduling algorithm that considers power consumption in its scheduling decisions. It also has a consolidation algorithm that determines the underloaded servers to be put to sleep or hibernated, the overloaded servers, the virtual machines to be migrated, and the servers that will receive the migrated virtual machines. In addition, the EEH framework includes a migration algorithm for transferring migrated virtual machines to their new servers. Simulation results indicate the superiority of the EEH framework over approaches that rely on a single technique to reduce power consumption, in terms of Power Usage Effectiveness (PUE), Data Center Energy Productivity (DCEP), average execution time, throughput, and cost savings. Index terms: green computing, scheduling, consolidation, power consumption.
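The consolidation step described in the abstract, classifying servers as underloaded or overloaded and then choosing VMs to migrate, can be sketched as follows. The thresholds and the largest-VM-first heuristic are illustrative assumptions, not the EEH framework's actual algorithm.

```python
UNDER, OVER = 0.2, 0.8  # hypothetical CPU-utilization thresholds

def classify(hosts):
    """Split hosts into underloaded (sleep candidates), overloaded, and normal."""
    under = [h for h, u in hosts.items() if u < UNDER]
    over = [h for h, u in hosts.items() if u > OVER]
    normal = [h for h in hosts if h not in under and h not in over]
    return under, over, normal

def select_migrations(hosts, vms_on):
    """From each overloaded host, migrate the largest VMs until its utilization
    drops below OVER; evacuate underloaded hosts entirely so they can sleep."""
    under, over, _ = classify(hosts)
    migrations = []
    for h in over:
        util = hosts[h]
        for vm, load in sorted(vms_on[h].items(), key=lambda kv: -kv[1]):
            if util <= OVER:
                break
            migrations.append(vm)
            util -= load
    for h in under:                      # sleep candidates: move every VM off
        migrations.extend(vms_on[h])
    return migrations
```

A complete consolidation loop would then pick destination servers for these VMs (the framework's third decision) subject to capacity and, in EEH's case, power considerations.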