Abstract: Cloud resource demands, especially unclear and emergent resource demands, are growing rapidly with the development of cloud computing, big data, and artificial intelligence. Traditional cloud resource allocation methods do not support this emergent mode in guaranteeing both the timeliness and the optimality of resource allocation. This paper proposes a resource allocation algorithm for emergent demands in cloud computing. After building the priority of resource allocation and the matching distances of resource…
“…Tang et al. [8] propose an approach for OpenFlow network models, which is implemented on a time-slot basis. Chen et al. [31] consider the problem of LB in a multi-objective framework, where the problem of resource allocation for emergent demands is resolved first. In [32] the authors present a load-balancing framework with the objective of minimizing the operational cost of data centers, using a genetic algorithm for resource allocation.…”
Load balancing techniques in cloud computing can be applied at three different levels: virtual machine load balancing, task load balancing, and resource load balancing. At every level, load balancing must be implemented efficiently to increase system performance. In this paper, we propose a task load-balancing strategy that is fair in terms of the workload added per VM and aims to improve the average response time and the makespan of the system in a cloud environment. The problem is formulated as an irreducible finite-state Markov process, for which a balance equation holds at each state. From the resulting steady-state probabilities we derive the expected utilizations of the virtual machines (VMs), which play a vital role in our task allocation approach. In our model, the Load Balancer (LBer) acts as a central server that uses our proposed fair task allocation scheme to distribute the incoming tasks in a fair, balanced manner among the virtual machines, taking into account their current state as well as their processing capabilities. Our scheme has been compared to recent algorithms that use particle swarm optimization and the honey-bee foraging scheme to achieve load balancing. Our experimental results show that our proposed scheme outperforms other state-of-the-art schemes in terms of makespan, average response time, and resource utilization, and provides a lower degree of imbalance.
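The abstract's idea of deriving expected VM utilizations from the balance equations of a finite-state Markov process and then routing tasks fairly can be illustrated with a minimal sketch. The sketch below is not the paper's exact model: it approximates each VM as an M/M/1/K queue (an assumption), solves the birth-death balance equations for the steady-state probabilities, and lets a hypothetical central balancer pick the VM with the lowest expected utilization per unit of processing capability.

```python
# Minimal sketch (assumption: each VM behaves like an M/M/1/K queue).
# Steady-state probabilities follow from the balance equations
# lambda * p_{n-1} = mu * p_n, and the expected utilization is P(server busy).
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    arrival_rate: float   # lambda: tasks/sec currently routed to this VM
    service_rate: float   # mu: tasks/sec the VM can process (capability)
    buffer_size: int      # K: maximum queued tasks (finite state space)

def steady_state_probs(vm: VM) -> list[float]:
    """Solve the birth-death balance equations p_n = rho^n * p_0, then normalize."""
    rho = vm.arrival_rate / vm.service_rate
    unnormalized = [rho ** n for n in range(vm.buffer_size + 1)]
    total = sum(unnormalized)
    return [p / total for p in unnormalized]

def expected_utilization(vm: VM) -> float:
    """Probability that the VM is busy, i.e. 1 - p_0."""
    return 1.0 - steady_state_probs(vm)[0]

def assign_task(vms: list[VM]) -> VM:
    """Hypothetical fair rule: send the task to the VM whose expected
    utilization per unit of processing capability is smallest."""
    target = min(vms, key=lambda v: expected_utilization(v) / v.service_rate)
    target.arrival_rate += 0.1  # book-keeping: routed load increases slightly
    return target

if __name__ == "__main__":
    pool = [VM("vm1", 2.0, 4.0, 10), VM("vm2", 3.0, 3.5, 10), VM("vm3", 1.0, 2.0, 10)]
    print("task assigned to", assign_task(pool).name)
```

The queueing model and the allocation rule are placeholders; the paper's own formulation of the Markov process and fairness criterion may differ.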
“…In [14], a search-engine activator was introduced to enhance the particle swarm optimization technique, which can dramatically reduce the average waiting time. [15] demonstrated an evolutionary computing algorithm that uses the ant colony's positive feedback process to handle the problem of virtual machine load during task scheduling, resulting in a higher resource utilization factor. [16] enhanced the evolutionary algorithm by taking into account virtual server computing capacity, network connectivity, and other factors to optimize load balancing and task computation time.…”
Using enhanced ant colony optimization, this study proposes an efficient heuristic scheduling technique for cloud infrastructure that addresses the problems of nonlinear loads, slow convergence, and incomplete knowledge of shared resources that plagued earlier resource-provisioning implementations. The cloud-based planning architecture is tailored to dynamic scheduling. To determine the best task allocation, a satisfaction factor is constructed by combining three objectives: the shortest waiting time, the degree of load balance, and the cost of goal accomplishment. A reward-and-punishment component is then used to modify the ant colony's pheromone-generation rules, which accelerates the solution time. In particular, a volatility component is leveraged to enhance the capability of the method, and a virtual machine load-weight component is included in the local pheromone update to maintain load balance across the virtual machines. Simulation experiments are used to explore and demonstrate the feasibility of the methodology. Compared with traditional methods, the simulation results show that the proposed methodology converges the fastest and achieves the shortest makespan, the most evenly distributed demand, and the best utilization of virtual machine capabilities. Consequently, the proposed resource-provisioning optimization technique outperforms the competing approaches.
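To make the ACO scheduling idea concrete, the following is a minimal sketch under stated assumptions: the "satisfaction" of an assignment combines waiting time (via makespan), load-balance degree, and cost with hypothetical weights, and pheromone is evaporated everywhere ("punishment") while being reinforced on the best tour found ("reward"). It is not the authors' exact formulation, and omits the volatility and local-update components.

```python
# ACO task-to-VM scheduling sketch (hypothetical weights and update rule).
import random

def satisfaction(schedule, task_len, vm_speed, vm_cost, w=(0.5, 0.3, 0.2)):
    """Higher is better: short makespan, balanced load, low cost."""
    loads = [0.0] * len(vm_speed)
    cost = 0.0
    for t, vm in enumerate(schedule):
        loads[vm] += task_len[t] / vm_speed[vm]
        cost += task_len[t] * vm_cost[vm]
    makespan = max(loads)
    balance = 1.0 - (max(loads) - min(loads)) / (max(loads) + 1e-9)
    return w[0] / makespan + w[1] * balance + w[2] / cost

def aco_schedule(task_len, vm_speed, vm_cost, ants=20, iters=50, rho=0.1, q0=2.0):
    n_tasks, n_vms = len(task_len), len(vm_speed)
    pheromone = [[1.0] * n_vms for _ in range(n_tasks)]
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        for _ in range(ants):
            # Each ant assigns every task to a VM, sampling by pheromone level.
            sched = [random.choices(range(n_vms), weights=pheromone[t])[0]
                     for t in range(n_tasks)]
            fit = satisfaction(sched, task_len, vm_speed, vm_cost)
            if fit > best_fit:
                best, best_fit = sched, fit
        # Evaporation ("punishment") everywhere, reinforcement ("reward") on the best tour.
        for t in range(n_tasks):
            for v in range(n_vms):
                pheromone[t][v] *= (1.0 - rho)
            pheromone[t][best[t]] += q0 * best_fit
    return best, best_fit

if __name__ == "__main__":
    tasks = [8, 5, 12, 7, 3, 9]   # task lengths (e.g. millions of instructions)
    speeds = [2.0, 3.0, 1.5]      # VM processing speeds
    costs = [0.3, 0.5, 0.2]       # per-unit execution cost
    print(aco_schedule(tasks, speeds, costs))
```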
“…Before the agent starts the learning process, the Q table is first initialized; then in each episode the algorithm starts from the initial state. At each state, the agent selects an action based on the ε-greedy strategy, obtains the reward value of the state-action pair, and the environment transitions to the next state; the Q value is then updated according to (8). Afterwards, the next state is regarded as the current state and the "action selection" operation is repeated until a terminal state is reached, at which point the episode terminates.…”
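The quoted procedure corresponds to standard tabular Q-learning. The sketch below assumes that equation (8), which is not reproduced here, is the usual update Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)], and that the environment exposes a hypothetical reset()/step() interface.

```python
# Tabular Q-learning with epsilon-greedy action selection, mirroring the
# quoted episode loop. The environment interface (reset/step, n_actions)
# is an assumption for illustration.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = defaultdict(lambda: [0.0] * env.n_actions)   # Q table, initialized to zero
    for _ in range(episodes):
        state = env.reset()                          # start from the initial state
        done = False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.randrange(env.n_actions)
            else:
                action = max(range(env.n_actions), key=lambda a: Q[state][a])
            next_state, reward, done = env.step(action)
            # update per the assumed form of eq. (8): standard Q-learning rule
            td_target = reward + (0 if done else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (td_target - Q[state][action])
            state = next_state                       # next state becomes the current state
    return Q
```

In the edge-PLC setting described below, the state would encode the current memory occupancy or arrival-rate level, the action a memory allocation decision, and the reward a penalty for data loss; those mappings are the paper's, not shown here.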
The rapid growth of the number of devices in the industrial Internet of things (IIoT) has greatly increased the amount of data involved. In order to alleviate the computing load of cloud servers and reduce the delay of data processing, edge-cloud computing cooperation has been introduced into the IIoT. General programmable logic controllers (PLCs), which have long played important roles in industrial control systems, are beginning to gain the ability to process large amounts of industrial data and share the workload of cloud servers, transforming them into edge-PLCs. However, the continuous influx of multiple types of concurrent production data streams against the limited capacity of the built-in memory in PLCs poses a major challenge. Therefore, the ability to allocate memory resources in edge-PLCs sensibly, so as to ensure data utilization and real-time processing, has become one of the core means of improving the efficiency of industrial processes. In this paper, to cope with the dynamically changing data arrival rate at each edge-PLC, we propose to optimize memory allocation in a distributed manner with Q-learning. Simulation experiments verify that the method effectively reduces the data loss probability while improving system performance.