Abstract: To address the shortcomings of particle swarm optimization (PSO) in solving multiobjective optimization problems, an improved multiobjective particle swarm optimization (IMOPSO) algorithm is proposed. In this study, a competitive strategy was introduced into the construction of the Pareto external archive to accelerate the search for nondominated solutions, thereby speeding up the establishment of the archive. In addition, the descending order of crowding distance me…
“…In this section, we conduct experiments to evaluate our proposed MGWO algorithm against the Cloud-fog cooperation algorithm [42], NSGA-II, and MPSO with respect to the objective functions of delay and energy consumption. In an Edge-Cloud environment, various IoT/mobile devices generate several applications.…”
Section: Simulation and Results
“…It also raises diversity in solution selection, which helps avoid local optima. The crowding-distance strategy limits the archive size: solutions in the archive are sorted in descending order of their crowding-distance values, and if the number of solutions exceeds the archive size, the nondominated solutions beyond that size are deleted [42]. See equation (16)…”
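The archive-truncation step quoted above can be sketched as follows. This is a generic illustration, not the authors' implementation; the function names and the index-based bookkeeping are assumptions:

```python
import math

def crowding_distance(points):
    """Crowding distance per solution; `points` is a list of objective tuples."""
    n = len(points)
    dist = [0.0] * n
    if n == 0:
        return dist
    for k in range(len(points[0])):
        order = sorted(range(n), key=lambda i: points[i][k])
        lo, hi = points[order[0]][k], points[order[-1]][k]
        # Boundary solutions get infinite distance so they are always kept.
        dist[order[0]] = dist[order[-1]] = math.inf
        if hi == lo:
            continue
        for j in range(1, n - 1):
            gap = points[order[j + 1]][k] - points[order[j - 1]][k]
            dist[order[j]] += gap / (hi - lo)
    return dist

def truncate_archive(points, max_size):
    """Sort by crowding distance (descending) and drop solutions beyond max_size."""
    if len(points) <= max_size:
        return points
    dist = crowding_distance(points)
    keep = sorted(range(len(points)), key=lambda i: dist[i], reverse=True)[:max_size]
    return [points[i] for i in keep]
```

Because boundary solutions receive infinite distance, truncation preserves the extremes of the front and removes solutions from its most crowded regions first.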
The revolution of the IoT and its capacity to serve various fields generate a large amount of data for processing. Tasks that require an instant response, especially delay-sensitive tasks, are sent to the fog node because of its proximity, while complex tasks are transferred to the cloud data center for its vast computation and storage. Sending tasks to the fog decreases transmission delay but increases the energy consumption of end users, whereas transferring tasks to the cloud reduces users' energy consumption but increases transmission delay due to the long distance; in addition, tasks must be assigned to resources compatible with their requirements. These are the main challenges in cloud-fog computing that need improvement. Thus, this study proposes a Multi-Objective Grey Wolf Optimizer (MGWO) algorithm, held in the fog broker (which plays an essential role in distributing tasks), to reduce the QoS objectives of delay and energy consumption. The simulation results verify the effectiveness of the MGWO algorithm compared with state-of-the-art algorithms in reducing delay and energy consumption.

INDEX TERMS Cloud-fog computing, delay, energy consumption, grey wolf optimizer, Internet of Things, meta-heuristic, task scheduling.
“…A larger value of inertia weight indicates greater global search ability (i.e., searching for a new area), whereas a smaller value indicates greater local search ability (i.e., refining the current search area) [27]. This study adopted a new technique [28] to improve the inertia weight of the algorithm as follows…”
Section: A. The Velocity of Each Particle Is Updated as Follows
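The inertia-weight trade-off described in the excerpt can be illustrated with the widely used linearly decreasing weight inside the standard PSO velocity update. This is a generic sketch; the exact improved-weight formula from [28] is not reproduced here, and the parameter values are illustrative:

```python
import random

def linear_inertia(w_max, w_min, t, t_max):
    """Common linearly decreasing inertia weight: starts at w_max (global
    search) and shrinks toward w_min (local search). Stand-in for the
    paper's improved formula from [28]."""
    return w_max - (w_max - w_min) * t / t_max

def update_velocity(v, x, pbest, gbest, w, c1=2.0, c2=2.0):
    """Standard PSO velocity update for one particle (lists of floats).
    c1/c2 are the usual cognitive/social coefficients."""
    r1, r2 = random.random(), random.random()
    return [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
            for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
```

Early iterations (large w) let particles overshoot into new regions; late iterations (small w) damp the velocity so the swarm refines the current area.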
The Internet of Things (IoT) generates massive data from smart devices that demand responses from cloud servers. Sending tasks to the cloud reduces the power consumed by the users' devices but increases the transmission delay of the tasks. In contrast, sending tasks to the fog server reduces the transmission delay thanks to the shorter distance between user and server, at the expense of higher energy consumption at the user end. Thus, this study proposes a mathematical framework for workload allocation that models the power-consumption and delay functions for both fog and cloud. A Modified Least Laxity First (MLLF) algorithm is then proposed to reduce the maximum delay threshold. Furthermore, a new multi-objective approach, the Non-dominated Particle Swarm Optimization (NPSO), is proposed to reduce energy consumption and delay compared with state-of-the-art algorithms. The simulation results show that NPSO outperforms the state-of-the-art algorithms in reducing energy consumption, while NSGA-II proves its effectiveness in reducing transmission delay compared with the other algorithms in the experimental simulation. In addition, the MLLF algorithm reduces the maximum delay threshold by approximately 11% compared with the related algorithms. Moreover, the results indicate that metaheuristics are well suited to distributed computing.
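The baseline rule that MLLF builds on can be sketched briefly. The paper's specific modification is not described in this excerpt, so the following shows only classic Least Laxity First ordering, with an assumed `(name, deadline, remaining_time)` task representation:

```python
def least_laxity_order(tasks, now=0.0):
    """Classic Least Laxity First: laxity = deadline - now - remaining
    execution time; the task with the least slack is scheduled first.
    Each task is a (name, deadline, remaining_time) tuple."""
    def laxity(task):
        _, deadline, remaining = task
        return deadline - now - remaining
    return sorted(tasks, key=laxity)
```

A task whose laxity reaches zero must run immediately or it will miss its deadline, which is why low-laxity tasks are prioritized.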
“…Gaussian variation [27] draws on the normal distribution, a continuous probability distribution, and has good local exploitation ability. The variation formula is…”
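The quoted formula is truncated, but Gaussian variation generally means perturbing a solution with zero-mean normal noise. A minimal sketch, with the step size `sigma` and the clamping bounds as illustrative assumptions:

```python
import random

def gaussian_mutation(x, sigma=0.1, bounds=None):
    """Perturb each component of the solution vector x with N(0, sigma^2)
    noise, then optionally clamp to the search-space bounds."""
    y = [xi + random.gauss(0.0, sigma) for xi in x]
    if bounds is not None:
        lo, hi = bounds
        y = [min(max(yi, lo), hi) for yi in y]
    return y
```

Because most normal samples fall close to zero, the mutated point stays near the parent, which is what gives this operator its local-exploitation character.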
To balance overall energy consumption and improve the energy efficiency of wireless sensor networks (WSNs), a distributed energy-balanced unequal clustering routing protocol based on an improved sine-cosine algorithm (DUCISCA) is proposed. First, DUCISCA adopts a time-based cluster-head competition algorithm in which the broadcast time depends on the residual energy of the candidate cluster head (CCH), its distance to the base station (BS), and its number of neighbour nodes; this time-based broadcast mechanism effectively reduces node overhead. Second, a competition radius that considers the node's distance to the BS and its residual energy is proposed, which balances the energy consumption of nodes in different locations and avoids the "hot spot" problem. Third, the cluster head's energy, the number of neighbour nodes, and the distance from an ordinary node to the cluster heads are all taken into account to obtain a better clustering result. Finally, to speed up convergence and improve the ability to escape local optima, an improved sine-cosine algorithm (ISCA) based on Latin hypercube sampling (LHS) and adaptive mutation is proposed: LHS population initialization enhances population diversity, an adaptive weight strategy accelerates convergence, and Gaussian mutation or Levy flight perturbs the population to jump out of local optima. The standard deviation of the cluster heads' residual energy in intercluster communication serves as the objective function for searching the energy-balanced intercluster data-forwarding path with ISCA.
Compared with EEUC, DEBUC, I-EEUC, and M-DEBUC, the simulation results prove that DUCISCA can effectively balance the overall network energy consumption and prolong the network lifetime.
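For context, the baseline move that ISCA refines is the standard sine-cosine position update toward the best solution found so far. The sketch below shows only that baseline; the adaptive-weight, LHS-initialization, and Gaussian/Levy perturbation refinements described above are omitted, and the control parameter `a` is an illustrative default:

```python
import math
import random

def sca_step(x, dest, t, t_max, a=2.0):
    """One standard sine-cosine algorithm update moving position x toward
    the destination (best-so-far) solution `dest`."""
    r1 = a - t * a / t_max  # shrinks linearly, shifting exploration to exploitation
    new = []
    for xi, di in zip(x, dest):
        r2 = random.uniform(0.0, 2.0 * math.pi)  # phase of the oscillation
        r3 = random.uniform(0.0, 2.0)            # random weight on the destination
        r4 = random.random()                     # sine/cosine branch selector
        osc = math.sin(r2) if r4 < 0.5 else math.cos(r2)
        new.append(xi + r1 * osc * abs(r3 * di - xi))
    return new
```

As `t` approaches `t_max`, `r1` shrinks to zero and the update freezes, which is exactly the behaviour the adaptive-weight and mutation strategies in ISCA are designed to counteract when it happens too early.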