Computation offloading extends cloud computing to the edge of the access network, close to users, bringing many benefits to terminal devices with limited battery and computational resources. Nevertheless, existing computation offloading approaches are difficult to apply in certain scenarios, such as those with densely distributed end-users and sparsely distributed network infrastructure. Technological advances in the unmanned aerial vehicle (UAV) and chip industries have endowed UAVs with greater computing resources and fostered the emergence of UAV-assisted mobile edge computing (MEC), which suits such scenarios. However, in a MEC system with multiple users and multiple servers, making sound offloading decisions and allocating system resources remain serious challenges. This paper studies the joint offloading-decision and resource-allocation problem in a UAV-assisted MEC environment with multiple users and servers. To ensure quality of service for end-users, we take the weighted total cost of delay, energy consumption, and the size of discarded tasks as our optimization objective. We formulate the joint optimization problem as a Markov decision process and apply the soft actor–critic (SAC) deep reinforcement learning algorithm to optimize the offloading policy. Numerical simulation results show that the policy optimized by our proposed SAC-based dynamic computation offloading (SACDCO) algorithm effectively reduces the delay, energy consumption, and size of discarded tasks in the UAV-assisted MEC system. Compared with the fixed local-UAV scheme in the specific simulation setting, our proposed approach reduces system delay and energy consumption by approximately 50% and 200%, respectively.
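The weighted-cost objective described above can be illustrated with a minimal sketch. This is not the paper's actual system model: the cost weights, CPU frequencies, channel rate, transmit power, and the energy model (`kappa * f^2 * cycles`) are all assumed values chosen for demonstration, and a single task is compared between local execution and UAV offloading.

```python
# Illustrative sketch: weighted cost of delay, energy, and discarded-task
# size for one task, comparing local execution vs. offloading to a UAV
# server. All parameter values below are assumptions for demonstration.

def weighted_cost(delay_s, energy_j, dropped_bits,
                  w_delay=0.5, w_energy=0.5, w_drop=1e-6):
    """Weighted total cost used as the optimization objective."""
    return w_delay * delay_s + w_energy * energy_j + w_drop * dropped_bits

def local_cost(cycles, f_local=1e9, kappa=1e-27):
    delay = cycles / f_local              # execution time on the device
    energy = kappa * f_local**2 * cycles  # dynamic CPU energy model (assumed)
    return weighted_cost(delay, energy, 0)

def offload_cost(bits, cycles, rate=5e6, f_uav=5e9, p_tx=0.5):
    tx_delay = bits / rate                # uplink transmission time
    delay = tx_delay + cycles / f_uav     # transmit + remote execution
    energy = p_tx * tx_delay              # device spends radio energy only
    return weighted_cost(delay, energy, 0)

bits, cycles = 1e6, 1e9                   # one task: 1 Mb of data, 1 Gcycles
c_local, c_uav = local_cost(cycles), offload_cost(bits, cycles)
decision = "offload" if c_uav < c_local else "local"
print(decision, c_local, c_uav)           # → offload 1.0 0.25
```

In a SAC-based scheme this per-task cost would appear (negated) as the reward signal, with the agent learning the offloading decision rather than comparing the two costs directly.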
Due to resource constraints and harsh operating conditions, wireless sensor networks should be self-adaptive in order to maintain desirable properties such as energy efficiency and fault tolerance. In this paper, we design a practical utility function that effectively balances transmit power, residual energy, and network connectivity, and we then investigate a topology control game model based on non-cooperative game theory. Theoretical analysis shows that the model is a potential game and converges to a Nash equilibrium. Based on this model, we propose an energy-efficient and fault-tolerant topology control game algorithm, EFTCG, which adaptively constructs the network topology. We further present two subalgorithms, EFTCG-1 and EFTCG-2: the former guarantees only single connectivity of the network, whereas the latter guarantees biconnectivity. We evaluate the energy efficiency of EFTCG-1 and analyze the fault tolerance of EFTCG-2. The simulation results verify the validity of the utility function: EFTCG-1 prolongs the network lifetime more effectively than other game-based algorithms, and EFTCG-2 is more robust while not significantly reducing the network lifetime.
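The convergence claim rests on potential-game theory: when a game admits a potential function, sequential best-response dynamics reach a Nash equilibrium. A toy sketch (not EFTCG itself; the node positions, power levels, and connectivity bonus are all illustrative) shows this convergence for three nodes choosing transmit powers, where a bidirectional link exists only if both endpoints' powers cover the distance:

```python
# Toy best-response dynamics for a topology control game. Each node's
# utility rewards global connectivity and penalizes its own transmit power,
# so best responses converge to a Nash equilibrium. All numbers are
# illustrative, not drawn from the paper.

positions = [0, 1, 3]          # nodes on a line (assumed layout)
POWERS = range(0, 4)           # candidate power (radius) levels
BONUS = 10                     # payoff for a connected network (assumed)

def connected(p):
    n = len(positions)
    links = {(i, j) for i in range(n) for j in range(i + 1, n)
             if min(p[i], p[j]) >= abs(positions[i] - positions[j])}
    reach, frontier = {0}, [0]         # BFS from node 0
    while frontier:
        i = frontier.pop()
        for j in range(n):
            if j not in reach and tuple(sorted((i, j))) in links:
                reach.add(j); frontier.append(j)
    return len(reach) == n

def utility(p, i):
    return (BONUS if connected(p) else 0) - p[i]

p = [3, 3, 3]                  # start at maximum power
changed = True
while changed:                 # sequential best-response dynamics
    changed = False
    for i in range(len(p)):
        best = max(POWERS, key=lambda v: utility(p[:i] + [v] + p[i+1:], i))
        if utility(p[:i] + [best] + p[i+1:], i) > utility(p, i):
            p[i] = best; changed = True
print(p)                       # → [1, 2, 2]
```

Each node ends with the minimum power that keeps the network connected, which is the energy-efficiency behavior the utility function is designed to induce.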
As a promising paradigm, computation offloading can move computing tasks to multi-access edge computing (MEC) servers, an appealing option for resource-constrained end-devices seeking to reduce their computational effort. However, because resources are limited, a crucial research challenge for computation offloading is designing an appropriate offloading policy that determines which tasks should be offloaded under complex circumstances. In this paper, we study the offloading decision problem in a software-defined networking (SDN) driven MEC environment with multiple users and multiple servers. To ensure that end-users do not abuse the computing resources in the MEC system, we take the profit of the MEC servers as our optimization objective, and we jointly optimize the selection of MEC servers, the size of the offloaded data, and the price of the MEC computing service to maximize that profit. Given the dynamic and stochastic behavior of end-users, however, obtaining an optimal policy in such a MEC environment is challenging. We therefore combine deep reinforcement learning (DRL) with game theory in our proposed approach. Specifically, we propose a proximal policy optimization (PPO) reinforcement learning framework to handle the selection of MEC servers, and we formulate a two-step optimization problem to determine the size of the offloaded data and the pricing of computing services, whose optimal values are obtained by reaching the Nash equilibrium of the strategy game among end-users. Extensive simulation results show that our proposal outperforms existing solutions in convergence time and stability.
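The two-step structure can be sketched with a simple leader-follower pricing model. This is an assumed formulation, not the paper's: users are given a logarithmic valuation `a_i*log(1+x_i) - price*x_i` of offloaded data (so each best response has the closed form `x_i = a_i/price - 1`, clipped at zero), and the server grid-searches the price maximizing `(price - cost) * total demand`. The valuations, cost, and grid are illustrative.

```python
# Sketch of a two-step pricing optimization: users best-respond to a posted
# price with an offloading size, and the server picks the profit-maximizing
# price. User valuations A, the serving cost, and the price grid are assumed.

A = [2.0, 4.0]        # per-user valuation of offloaded computation (assumed)
COST = 1.0            # server's cost per unit of served data (assumed)

def best_response(a, price):
    # argmax_x of a*log(1 + x) - price*x  =>  x = a/price - 1, clipped at 0
    return max(a / price - 1.0, 0.0)

def profit(price):
    demand = sum(best_response(a, price) for a in A)
    return (price - COST) * demand

prices = [i / 10 for i in range(11, 40)]   # grid over (COST, max(A))
best_price = max(prices, key=profit)
print(best_price, round(profit(best_price), 4))   # → 1.7 1.0706
```

Because each user's best response depends only on the posted price, the users' strategies here are mutual best responses (a Nash equilibrium of the induced game) at the server's chosen price; the paper's actual game among end-users is richer than this sketch.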
Device-to-device (D2D) communication has recently been established in the literature as an effective means to improve spectrum utilization and enhance the energy efficiency of future cellular systems. However, reusing resources within the same cell raises serious interference issues. We study problems pertaining to D2D communication, namely dual mode selection, channel allocation, and power control, aiming to maximize the overall throughput of the system while keeping the generated interference to a minimum. Because this is an NP-hard problem, we decompose the optimization into two layers: an inner layer, in which a DQN algorithm indicates the optimal transmission power to allocate to the D2D pairs according to their mode of operation, and an outer layer, in which strategic decisions are made, such as which communication mode to use and how to allocate the channels. We demonstrate the superiority of the proposed scheme, in terms of both system throughput and overall performance, through simulation experiments involving different scenarios.
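The inner-layer decision can be illustrated by the underlying physical trade-off. In this minimal sketch an exhaustive search over power levels stands in for the DQN, and all channel gains, powers, and the noise floor are assumed values: one cellular user and one D2D pair share a channel, and the D2D transmit power is chosen to maximize sum throughput subject to an SINR floor protecting the cellular user.

```python
# Sketch of the inner-layer power decision for one channel reused by a
# cellular user (CU) and a D2D pair. Exhaustive search stands in for the
# DQN; all gains, powers, and thresholds below are illustrative.
import math

P_CU, G_CU = 1.0, 1.0          # CU transmit power and direct-link gain
G_D2B = 0.1                    # D2D-transmitter -> base-station interference
G_D2D, G_C2D = 0.8, 0.05       # D2D direct gain; CU -> D2D-receiver gain
N0 = 0.01                      # noise power
SINR_MIN = 5.0                 # protection threshold for the CU
LEVELS = [0.0, 0.2, 0.5, 1.0]  # candidate D2D power levels

def sum_rate(p_d2d):
    sinr_cu = P_CU * G_CU / (N0 + p_d2d * G_D2B)
    if sinr_cu < SINR_MIN:
        return None                            # infeasible: CU unprotected
    sinr_dd = p_d2d * G_D2D / (N0 + P_CU * G_C2D)
    return math.log2(1 + sinr_cu) + math.log2(1 + sinr_dd)

feasible = {p: sum_rate(p) for p in LEVELS if sum_rate(p) is not None}
best_power = max(feasible, key=feasible.get)
print(best_power, round(feasible[best_power], 3))
```

A DQN would learn this mapping from channel state to power level from experience instead of enumerating it, which matters once the state space (many pairs, fading channels, modes) becomes too large to search.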
In wireless sensor networks, nodes may behave selfishly to conserve their energy, which causes energy imbalance among nodes, because such networks lack a central controller that can enforce cooperation. The noncooperative game is an effective tool for modeling this kind of selfish behavior. In this paper, we address the problems of transmission power minimization and energy balance through a topology control game. First, we establish a topology control game model and prove that it is an ordinal potential game with Pareto optimality. Second, based on this model, we propose an Energy Balance Topology control Game algorithm (EBTG), in which, taking both the energy efficiency and the energy balance of the nodes into account, we design an improved utility function that incorporates the Theil index. Finally, simulation results show that EBTG improves energy balance and energy efficiency, and prolongs the network lifetime in comparison with other topology control algorithms based on game theory.
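The Theil index referenced in the utility design is a standard inequality measure: it is 0 when residual energies are perfectly balanced and grows with imbalance. A short sketch (the energy vectors are illustrative, and this is only the index itself, not EBTG's full utility function):

```python
# Theil index over residual node energies: 0 means perfectly balanced,
# larger values mean greater imbalance. Energy vectors are illustrative.
import math

def theil_index(energies):
    n = len(energies)
    mean = sum(energies) / n
    return sum((e / mean) * math.log(e / mean) for e in energies) / n

balanced = [5.0, 5.0, 5.0, 5.0]
skewed = [9.0, 5.0, 4.0, 2.0]
print(theil_index(balanced), round(theil_index(skewed), 4))
```

Penalizing this index in a node's utility steers the game's equilibria away from topologies that drain a few relay nodes while others stay nearly full, which is the energy-balance effect the abstract describes.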