The performance of wireless communication is fundamentally constrained by the limited battery life of wireless devices, whose operation is frequently disrupted by the need for manual battery replacement or recharging. Recent advances in radio frequency (RF) enabled wireless energy transfer (WET) technology offer an attractive solution, wireless powered communication (WPC), in which dedicated wireless power transmitters deliver continuous and stable microwave energy to wireless devices over the air. As a key enabling technology for truly perpetual communications, WPC opens up the potential to build networks with larger throughput, higher robustness, and increased flexibility compared with their battery-powered counterparts. However, combining wireless energy and information transmission also raises many new research problems and implementation issues. In this article, we provide an overview of state-of-the-art RF-enabled WET technologies and their applications to wireless communications, highlighting the key design challenges, solutions, and opportunities ahead.
Finite battery lifetime and low computing capability of size-constrained wireless devices (WDs) have been longstanding performance limitations of many low-power wireless networks, e.g., wireless sensor networks (WSNs) and the Internet of Things (IoT). The recent development of radio frequency (RF) based wireless power transfer (WPT) and mobile edge computing (MEC) technologies provides promising solutions to remove these limitations, achieving sustainable device operation and enhanced computational capability. In this paper, we consider a multi-user MEC network powered by WPT, where each energy-harvesting WD follows a binary computation offloading policy, i.e., the data set of a task must be executed as a whole, either locally or remotely at the MEC server via task offloading. In particular, we are interested in maximizing the (weighted) sum computation rate of all the WDs in the network by jointly optimizing the individual computing mode selection (i.e., local computing or offloading) and the system transmission time allocation (between WPT and task offloading). The major difficulty lies in the combinatorial nature of multi-user computing mode selection and its strong coupling with transmission time allocation. To tackle this problem, we first consider a decoupled optimization: assuming the mode selection is given, we propose a simple bisection search algorithm to obtain the conditionally optimal time allocation. On top of that, a coordinate descent method is devised to optimize the mode selection. The method is simple to implement but may suffer from high computational complexity in a large-size network. To address this, we further propose a joint optimization method based on the ADMM (alternating direction method of multipliers) decomposition technique, whose computational complexity grows much more slowly as the network size increases.
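The conditional time-allocation step reduces to a one-dimensional search. Below is a minimal sketch of the bisection idea on a toy concave objective; the objective, variable names, and tolerance are illustrative assumptions, not the paper's exact formulation.

```python
import math

def bisect_time_allocation(dR, lo=0.0, hi=1.0, tol=1e-9):
    """Locate the maximizer of a concave objective R(a) over the WPT time
    fraction a in (lo, hi) by bisecting on the sign of its derivative dR(a).
    Concavity of R means dR is decreasing, so the sign test is valid."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dR(mid) > 0:   # objective still increasing: optimum lies right
            lo = mid
        else:             # objective decreasing: optimum lies left
            hi = mid
    return 0.5 * (lo + hi)

# toy concave objective R(a) = log(1 + 10a) * (1 - a); its derivative:
dR = lambda a: 10 * (1 - a) / (1 + 10 * a) - math.log(1 + 10 * a)
a_star = bisect_time_allocation(dR)   # stationary point of R on (0, 1)
```

Each iteration halves the search interval, so the cost is logarithmic in the desired accuracy, which is what makes the per-mode-selection evaluation cheap.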
Extensive simulations show that both proposed methods efficiently achieve near-optimal performance under various network setups and significantly outperform the other representative benchmark methods considered. Index Terms—Mobile edge computing, wireless power transfer, binary computation offloading, resource allocation.
Wireless powered mobile-edge computing (MEC) has recently emerged as a promising paradigm to enhance the data processing capability of low-power networks, such as wireless sensor networks and the Internet of Things (IoT). In this paper, we consider a wireless powered MEC network that adopts a binary offloading policy, so that each computation task of the wireless devices (WDs) is either executed locally or fully offloaded to an MEC server. Our goal is to design an online algorithm that optimally adapts task offloading decisions and wireless resource allocations to the time-varying wireless channel conditions. This requires quickly solving hard combinatorial optimization problems within the channel coherence time, which is hardly achievable with conventional numerical optimization methods. To tackle this problem, we propose a Deep Reinforcement learning-based Online Offloading (DROO) framework that implements a deep neural network as a scalable solution that learns the binary offloading decisions from experience. It eliminates the need to solve combinatorial optimization problems, and thus greatly reduces the computational complexity, especially in large-size networks. To further reduce the complexity, we propose an adaptive procedure that automatically adjusts the parameters of the DROO algorithm on the fly. Numerical results show that the proposed algorithm achieves near-optimal performance while decreasing the computation time by more than an order of magnitude compared with existing optimization methods. For example, the CPU execution latency of DROO is less than 0.1 second in a 30-user network, making real-time and optimal offloading truly viable even in a fast fading environment. Index Terms—Mobile-edge computing, wireless power transfer, reinforcement learning, resource allocation.
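One ingredient of such a framework is mapping the neural network's relaxed output in [0,1]^N to a small set of candidate binary offloading decisions, among which the best-scoring one is kept. The sketch below uses a simplified "flip the most uncertain entries" heuristic; the function name and the heuristic itself are illustrative assumptions, not DROO's exact quantization procedure.

```python
def quantize_candidates(probs, K):
    """Map a relaxed offloading vector (entries in [0, 1]) to K candidate
    binary decisions: the first candidate rounds at 0.5, and each further
    candidate flips one of the entries closest to 0.5 (most 'uncertain')."""
    cands = [[int(p > 0.5) for p in probs]]
    # entries ordered by how close they are to the 0.5 decision boundary
    order = sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))
    for i in order[:K - 1]:
        c = list(cands[0])
        c[i] = 1 - c[i]          # flip the i-th (uncertain) entry
        cands.append(c)
    return cands

# four WDs, three candidate offloading decisions to evaluate
candidates = quantize_candidates([0.9, 0.4, 0.6, 0.1], K=3)
```

Evaluating K candidates replaces an exhaustive search over 2^N binary decisions, which is the source of the complexity reduction the abstract describes.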
The rapidly growing wave of wireless data services is pushing against the boundary of our communication networks' processing power. The pervasive and exponentially increasing data traffic presents imminent challenges to all aspects of wireless system design, such as spectrum efficiency, computing capability, and fronthaul/backhaul link capacity. In this article, we discuss the challenges and opportunities in the design of scalable wireless systems to embrace this "bigdata" era. On one hand, we review state-of-the-art networking architectures and signal processing techniques adaptable for managing bigdata traffic in wireless networks. On the other hand, instead of viewing mobile bigdata as an unwanted burden, we introduce methods to capitalize on the vast data traffic, building a bigdata-aware wireless network with better wireless service quality and new mobile applications. We highlight several promising future research directions for wireless communications in the mobile bigdata era.
The performance of the cloud radio access network (C-RAN) is constrained by the limited fronthaul link capacity under future heavy data traffic. To tackle this problem, extensive efforts have been devoted to designing efficient signal quantization/compression techniques in the fronthaul to maximize the network throughput. However, most previous results are based on information-theoretic quantization methods, which are hard to implement in practice due to their high complexity. In this paper, we propose using practical uniform scalar quantization in the uplink communication of an orthogonal frequency division multiple access (OFDMA) based C-RAN system, where the mobile users are assigned orthogonal sub-carriers for transmission. In particular, we study the joint design of wireless power control and fronthaul quantization over the sub-carriers to maximize the system throughput. Efficient algorithms are proposed to solve the joint optimization problem when either the information-theoretic or the practical fronthaul quantization method is applied. We show that the fronthaul capacity constraints have a significant impact on the optimal wireless power control policy. As a result, the joint optimization shows significant performance gain compared with optimizing only wireless power control or only fronthaul quantization. Besides, we show that the proposed simple uniform quantization scheme performs very close to the throughput upper bound, and in fact coincides with the upper bound when the fronthaul capacity is sufficiently large. Overall, our results reveal the practically achievable throughput performance of C-RAN for its efficient deployment in next-generation wireless communication systems. Index Terms—Cloud radio access network (C-RAN), fronthaul constraint, quantize-and-forward, orthogonal frequency division multiple access (OFDMA), power control, throughput maximization.
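For reference, a uniform scalar quantizer of the kind the abstract advocates can be sketched in a few lines; the bit budget b and clipping range xmax below are illustrative parameters, not the paper's design.

```python
import numpy as np

def uniform_quantize(x, b, xmax):
    """Mid-rise uniform scalar quantizer with 2**b levels on [-xmax, xmax]:
    clip the input, map it to a level index, and return the level midpoint."""
    step = 2 * xmax / (2 ** b)                        # quantization step size
    idx = np.clip(np.floor((x + xmax) / step), 0, 2 ** b - 1)
    return -xmax + (idx + 0.5) * step                 # reconstruction point

# quantization error of in-range inputs is bounded by step/2 (= 1/16 here)
x = np.linspace(-1.0, 1.0, 1001)
err = np.max(np.abs(uniform_quantize(x, b=4, xmax=1.0) - x))
```

Its per-sample cost is a handful of arithmetic operations, which is the practicality argument relative to information-theoretic vector compression.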
The normal operation of a power system relies on accurate state estimation that faithfully reflects the physical state of the electrical power grid. However, recent research shows that carefully synthesized false-data injection attacks can bypass the security system and introduce arbitrary errors into state estimates. In this paper, we use graphical methods to study defense mechanisms against false-data injection attacks on power system state estimation. By securing carefully selected meter measurements, no false-data injection attack can be launched to compromise any set of state variables. We characterize the optimal protection problem, which protects the state variables with the minimum number of measurements, as a variant of the Steiner tree problem in a graph. Based on this graphical characterization, we propose both exact and reduced-complexity approximation algorithms. In particular, we show that the proposed tree-pruning based approximation algorithm significantly reduces computational complexity while yielding negligible performance degradation compared with the optimal algorithms. The advantageous performance of the proposed defense mechanisms is verified in IEEE standard power system test cases. Index Terms—False-data injection attack, power system state estimation, smart grid security, graph algorithms.
The large-scale integration of plug-in electric vehicles (PEVs) into the power grid spurs the need for efficient charging coordination mechanisms. It can be shown that the optimal charging schedule smooths out the energy consumption over time so as to minimize the total energy cost. In practice, however, it is hard to smooth out the energy consumption perfectly, because the future PEV charging demand is unknown at the moment when the charging rate of an existing PEV must be determined. In this paper, we propose an Online cooRdinated CHARging Decision (ORCHARD) algorithm, which minimizes the energy cost without knowing the future information. Through rigorous proof, we show that ORCHARD is strictly feasible in the sense that it is guaranteed to fulfill all charging demands before their due time. Meanwhile, it achieves the best known competitive ratio of 2.39. To further reduce the computational complexity of the algorithm, we propose a novel reduced-complexity algorithm to replace the standard convex optimization techniques used in ORCHARD. Through extensive simulations, we show that the average performance gap between ORCHARD and the offline optimal solution, which utilizes complete future information, is as small as 14%. By setting a proper speeding factor, the average performance gap can be further reduced to less than 6%.
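The "smoothing" structure of the offline optimum mentioned above is the classic valley-filling solution: with a convex energy-cost function, the optimal schedule raises the aggregate load toward a common water level wherever possible. A minimal sketch under that assumption follows; the slot granularity, variable names, and bisection tolerance are illustrative, not the paper's formulation.

```python
def valley_fill(base, demand, tol=1e-9):
    """Offline 'valley-filling': spread a total charging demand over time
    slots with background load `base` so the aggregate load is as flat as
    possible, which is optimal for any convex energy-cost function.
    Bisects for the water level L with sum(max(L - base_t, 0)) = demand."""
    lo, hi = min(base), max(base) + demand   # water level is bracketed here
    while hi - lo > tol:
        L = 0.5 * (lo + hi)
        if sum(max(L - b, 0.0) for b in base) < demand:
            lo = L                           # level too low: not all demand fits
        else:
            hi = L                           # level high enough: tighten from above
    L = 0.5 * (lo + hi)
    return [max(L - b, 0.0) for b in base]   # per-slot charging rates

# background load [3, 1, 2] plus 3 units of charging flattens to [3, 3, 3]
rates = valley_fill([3.0, 1.0, 2.0], demand=3.0)
```

The online difficulty the abstract describes is precisely that this water level depends on future demand, which an online algorithm cannot observe.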