Today, cloud computing is at the heart of information technology. This technological paradigm relies on a simple concept: the ability to deliver hardware and software resources as services directly over the Internet. A set of mechanisms cooperate to maintain cloud reliability and to allow continuous delivery of these services while guaranteeing the same quality of service (QoS) and respecting the service-level agreement (SLA) of each client. Load balancing is one of these mechanisms and provides a crucial service: it can be defined as the ability of the system to ensure fairness in the distribution of workload across all servers. The most recent load balancing techniques are hybrid methods, in most cases combining static and dynamic approaches; in other cases they go further by integrating additional mechanisms to improve the overall efficiency of the system. Performance is evaluated through parameters that generally reflect the degree of compliance with the SLA and QoS. In order to enhance load balancing and task scheduling in cloud environments, we propose in this paper a different hybrid approach that decomposes the problem and operates on two levels through two stages: (i) first, clusters are built for each datacenter, grouping together subsets of servers that have close utilization rates; (ii) then, task scheduling and load balancing operate at the datacenter level to handle distribution over clusters, and at the cluster level to ensure fairness between servers of the same cluster. Our method allows hot deployment in already operating cloud environments and offers excellent scalability, decoupling of responsibilities, and strong interoperability between the different mechanisms.
To prove its validity, we implemented it on the standard CloudSim Plus simulator and carried out a comparative study, which shows better results than existing approaches in terms of makespan, reaction time, number of required migrations, and SLA violations.
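Stage (i) of the approach, grouping each datacenter's servers into clusters of close utilization rates, can be sketched as a simple utilization-based bucketing. The sketch below is illustrative only: the server names, utilization values, and band width are assumptions, not values or code from the paper.

```python
# Illustrative sketch of stage (i): bucket servers whose utilization
# rates fall in the same band into one cluster.
# Server names, rates, and the band width are hypothetical.

def build_clusters(utilization, band_width=0.2):
    """Group servers by utilization band (e.g. 0.0-0.2 -> band 0)."""
    clusters = {}
    for server, rate in utilization.items():
        band = int(rate / band_width)
        clusters.setdefault(band, []).append(server)
    return clusters

servers = {"s1": 0.15, "s2": 0.18, "s3": 0.55, "s4": 0.62, "s5": 0.91}
clusters = build_clusters(servers)
# s1 and s2 end up in the same cluster; the others each get their own.
```

Stage (ii) would then dispatch tasks first to a cluster (datacenter level) and then to a server inside it (cluster level); that dispatching logic is specific to the paper's method and is not reproduced here.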
Climate change and global warming are among the most harmful consequences of human activity. Indeed, human technological development and demographic growth have been accompanied by globalization. In this new era, the logistics revolution has removed borders, and industrial value chains have been distributed all over the world. It is becoming difficult to attribute responsibility for ecological impacts accurately, since consumers are not necessarily in the same area as the countries where goods are manufactured or raw materials are extracted. At the same time, a new paradigm called the Internet of Things has reshaped our relationship to the world: the smallest everyday objects are connected to the Internet, allowing the acquisition of large amounts of data, interaction with cyberspace, and, in some cases, automated action on the real world. On the other hand, the last decade has also seen the rise of a new technology for storing transaction data in a reliable and consented way. Blockchain has changed how operations are carried out, based on the concept of a distributed ledger, giving new breath to the economy. It has also entered enterprise operations by allowing private implementations with permissioned blockchains. The aim of this work is to take advantage of the power of the two technologies combined: the Internet of Things provides wide possibilities for collecting information and establishing controls, while blockchain provides very good means to ensure traceability while respecting confidentiality and privacy. Together they allow us to create a global architecture for a privacy-respecting digital ecological impact calculator.
To validate our proposal, we first modeled it with a Petri net to verify its functioning, then implemented it to measure the average running time of each major step: 0.342 second per operation for certification authority operations, 0.342 second per operation for reporting production and buying actions, and 0.299 second per scoring operation. Finally, we used these measurements to build a queueing model to check that the proposed architecture's steady state does not change over time. The results show that, for the simplest form of queueing model, the servers of our architecture have a utilization rate close to 50% and the overall waiting time remains below one minute; with the Petri net, we proved from the marking graph that GIEFC performs the expected tasks according to the described specifications.
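The reported utilization and waiting-time figures are consistent with the simplest queueing model, M/M/1, fed with the measured service times. The sketch below is a hedged illustration only: the 0.342 s service time comes from the abstract, but the arrival rate is an assumed value chosen to produce a 50% utilization, not a figure from the paper.

```python
# Hedged M/M/1 illustration of the steady-state check.
# The 0.342 s service time is the measured value reported above;
# the arrival rate is an ASSUMPTION chosen so that rho = 0.5.

service_time = 0.342        # seconds per operation (measured)
mu = 1 / service_time       # service rate, about 2.92 operations/s
lam = mu / 2                # assumed arrival rate giving rho = 0.5

rho = lam / mu              # server utilization
wait = 1 / (mu - lam)       # M/M/1 mean time in system: W = 1/(mu - lam)

print(f"utilization = {rho:.2f}, mean time in system = {wait:.3f} s")
```

Under these assumptions the utilization is exactly 0.5 and the mean time in system is well under a minute, matching the steady-state behavior described above.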