The smart grid is widely considered to be the informatization of the power grid. As an essential characteristic of the smart grid, demand response can reschedule users' energy consumption to reduce the operating expense of expensive generators and, in the long run, defer capacity additions. This survey comprehensively explores four major aspects of demand response: programs, issues, approaches, and future extensions. Specifically, we first introduce the incentive schemes and tariffs that power utilities use to encourage users to reschedule their energy usage patterns. We then survey the mathematical models and problems studied in the literature, followed by the state-of-the-art approaches and solutions that address these issues. Finally, based on this overview, we outline the potential challenges and future research directions in the context of demand response.
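The core idea of rescheduling energy usage under a tariff can be made concrete with a toy example. The sketch below is illustrative only and not taken from the survey: a deferrable load is greedily placed into the cheapest hours of an assumed time-of-use price curve, with all prices, quantities, and the per-hour cap made up for the example.

```python
# Toy demand-response illustration (not from the survey): schedule a
# deferrable appliance's energy use into the cheapest hours under an
# assumed time-of-use tariff.

def schedule_load(prices, energy_needed, max_per_hour):
    """Greedily place `energy_needed` kWh into the cheapest hours,
    at most `max_per_hour` kWh per hour. Returns per-hour usage."""
    usage = [0.0] * len(prices)
    for hour in sorted(range(len(prices)), key=lambda h: prices[h]):
        take = min(max_per_hour, energy_needed)
        usage[hour] = take
        energy_needed -= take
        if energy_needed <= 0:
            break
    return usage

prices = [0.10, 0.10, 0.30, 0.50, 0.30, 0.10]  # $/kWh per hour (assumed)
usage = schedule_load(prices, energy_needed=4.0, max_per_hour=2.0)
cost = sum(p * u for p, u in zip(prices, usage))
```

Here the 4 kWh land entirely in the two cheapest hours, avoiding the peak-price hour; real demand-response models add comfort constraints, deadlines, and coupling across users, which this greedy sketch ignores.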
Future generation wireless networks, i.e., 5G and beyond, have to accommodate the surging growth of mobile data traffic and to support a high density of mobile users with a variety of services and applications. Meanwhile, the networks become increasingly dense, heterogeneous, decentralized, and ad hoc in nature, involving numerous and diverse network entities. As such, different objectives, e.g., high throughput and low latency, need to be achieved, and service and resource allocation has to be designed and optimized accordingly. However, given the dynamics and uncertainty inherent in wireless network environments, conventional approaches to service and resource management that require complete and perfect knowledge of the system become inefficient or even inapplicable. Inspired by the success of machine learning in solving complicated control and decision-making problems, in this article, we focus on deep reinforcement learning based approaches, which allow network entities to learn and build knowledge about the networks to make optimal decisions locally and independently. We first present an overview and fundamental concepts of deep reinforcement learning. Next, we review related works that capitalize on deep reinforcement learning to address different issues in 5G networks. Finally, we present an application of deep reinforcement learning to 5G network slicing optimization. The numerical results demonstrate that the proposed approach achieves superior performance compared with baseline solutions.
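The reinforcement-learning idea behind such approaches can be sketched in miniature. The toy below is a tabular Q-learning agent, a deliberately simplified stand-in for the deep RL methods the article surveys (deep RL replaces the table with a neural network): an agent allocates one of three bandwidth levels to a slice and is rewarded for matching the current demand level. The states, actions, and reward function are all assumptions made for illustration.

```python
import random

# Tabular Q-learning sketch (a simplified stand-in for deep RL):
# allocate a bandwidth level to a network slice to match its demand.
random.seed(0)
STATES = range(3)     # demand level: low / medium / high
ACTIONS = range(3)    # bandwidth allocation: low / medium / high
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Perfect match earns 1; over- or under-provisioning is penalized.
    return 1.0 - abs(state - action) * 0.5

for _ in range(20000):
    s = random.choice(list(STATES))                 # observe demand
    if random.random() < EPS:                       # epsilon-greedy action
        a = random.choice(list(ACTIONS))
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    r = reward(s, a)
    s_next = random.choice(list(STATES))            # demand evolves randomly
    best_next = max(Q[(s_next, x)] for x in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

# The learned policy maps each demand level to an allocation.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

After training, the greedy policy matches allocation to demand for every state, which is the optimal behavior under this toy reward; the surveyed deep methods tackle the same loop with high-dimensional network state that no table could hold.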
Fog computing, characterized by extending cloud computing to the edge of the network, has recently received considerable attention. The fog is not a substitute for but a powerful complement to the cloud, so the interplay and cooperation between the edge (fog) and the core (cloud) merit study. To address this issue, we study the tradeoff between power consumption and delay in a cloud-fog computing system. Specifically, we first mathematically formulate the workload allocation problem. We then develop an approximate approach that decomposes the primal problem into three subproblems of the corresponding subsystems, which can be solved independently. Finally, based on extensive simulations and numerical results, we show that by sacrificing modest computation resources to save communication bandwidth and reduce transmission latency, fog computing can significantly improve the performance of cloud computing.
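The power-delay tradeoff in workload allocation can be illustrated with a toy model. The sketch below is not the paper's formulation: it splits a workload between fog and cloud, where fog processing incurs a (here, quadratic) power cost and cloud offloading incurs transmission delay, and finds the split minimizing a weighted sum by brute-force search, a crude stand-in for the paper's decomposition-based approximate solution. Every constant is an assumption.

```python
# Toy cloud-fog workload split (illustrative model, not the paper's):
# x units are processed at the fog, W - x at the cloud.

def total_cost(x, W=100.0, alpha=0.5, c_power=0.02, c_delay=0.5):
    power = c_power * x ** 2          # fog power grows superlinearly (assumed)
    delay = c_delay * (W - x)         # cloud offloading adds WAN delay (assumed)
    return alpha * power + (1 - alpha) * delay

def best_split(W=100.0, steps=1000, **kw):
    # Brute-force search over candidate splits of the workload.
    candidates = [W * i / steps for i in range(steps + 1)]
    return min(candidates, key=lambda x: total_cost(x, W=W, **kw))
```

With these constants the optimum is an interior split (some work at the fog, the rest at the cloud), matching the abstract's point that spending modest fog computation saves bandwidth and transmission latency rather than replacing the cloud outright.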