Mobile-edge computation offloading (MECO) offloads intensive mobile computation to clouds located at the edges of cellular networks. MECO is thus envisioned as a promising technique for prolonging the battery lives and enhancing the computation capacities of mobiles. In this paper, we study resource allocation for a multiuser MECO system based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). First, for the TDMA MECO system with infinite or finite cloud computation capacity, the optimal resource allocation is formulated as a convex optimization problem that minimizes the weighted-sum mobile energy consumption under a computation-latency constraint. The optimal policy is proved to have a threshold-based structure with respect to a derived offloading priority function, which assigns priorities to users according to their channel gains and local-computing energy consumption. As a result, users with priorities above and below a given threshold perform complete and minimum offloading, respectively. Moreover, for a cloud with finite capacity, a sub-optimal resource-allocation algorithm is proposed to reduce the complexity of computing the threshold. Next, we consider the OFDMA MECO system, for which the optimal resource allocation is formulated as a mixed-integer problem. To solve this challenging problem and characterize its policy structure, a low-complexity sub-optimal algorithm is proposed that transforms the OFDMA problem into its TDMA counterpart. The corresponding resource allocation is derived by defining an average offloading priority function and is shown by simulation to have close-to-optimal performance.
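The threshold structure admits a compact implementation. Below is a minimal Python sketch of the TDMA policy, assuming a given threshold theta and a hypothetical monotone priority function of the channel gain and the local-computing energy per bit; the paper derives the exact priority function, so the surrogate used here is only illustrative.

```python
def offloading_decisions(users, theta):
    """Threshold-based offloading: users whose priority exceeds theta
    offload all their input data; the rest offload only the minimum
    amount needed to meet the latency constraint (possibly zero).

    `users` is a list of dicts with keys:
      'h'     -- channel power gain
      'e_loc' -- local-computing energy per bit
      'l_min' -- minimum offloaded bits required by the deadline
      'L'     -- total input-data bits
    """
    def priority(h, e_loc):
        # Assumption: an illustrative monotone surrogate -- priority
        # grows with channel gain and with the cost of local computing.
        # The paper derives the exact functional form.
        return h * e_loc

    decisions = []
    for u in users:
        if priority(u['h'], u['e_loc']) >= theta:
            decisions.append(u['L'])        # complete offloading
        else:
            decisions.append(u['l_min'])    # minimum offloading
    return decisions
```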
Achieving long battery lives or even self-sustainability has been a long-standing challenge in designing mobile devices. This paper presents a novel solution that seamlessly integrates two technologies, mobile cloud computing and microwave power transfer (MPT), to enable computation in passive low-complexity devices such as sensors and wearable computing devices. Specifically, considering a single-user system, a base station (BS) either transfers power to or offloads computation from a mobile to the cloud; the mobile uses the harvested energy to compute the given data either locally or by offloading. A framework for energy-efficient computing is proposed that comprises a set of policies controlling the CPU cycles for the local-computing mode, the time division between MPT and offloading for the offloading mode, and the mode selection. Given CPU-cycle statistics and channel-state information (CSI), the policies aim at maximizing the probability of successfully computing the given data, called the computing probability, under the energy-harvesting and deadline constraints. The policy optimization is translated into the equivalent problems of minimizing the mobile energy consumption for local computing and maximizing the mobile energy savings for offloading, which are solved using convex optimization theory. The structures of the resultant policies are characterized in closed form. Furthermore, given non-causal CSI, the analytical framework is extended to support computation-load allocation over multiple channel realizations, which further increases the computing probability. Lastly, simulations demonstrate the feasibility of wirelessly powered mobile cloud computing and the gain of its optimal control.
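As a rough illustration of the mode-selection step, the Python sketch below compares the mobile energy required by each mode against the harvested-energy budget and picks the cheaper feasible mode. The energy models are simplified placeholders (the common cubic dynamic-power model for local computing and an inverted Shannon-capacity formula for offloading); the paper's actual policies, including the MPT/offloading time division, come from convex optimization and are not reproduced here.

```python
def mode_selection(L, C, T, E_max, h, B=1e6, N0=1e-9, k=1e-28):
    """Select local computing vs. offloading for one deadline T.

    L     -- input-data size in bits
    C     -- CPU cycles required to compute the data locally
    T     -- deadline in seconds
    E_max -- harvested-energy budget in joules
    h     -- channel power gain to the BS
    B, N0, k are illustrative system constants (assumptions).
    """
    # Local computing: run at the minimum feasible frequency f = C/T
    # under the common dynamic-power model (energy per cycle ~ k * f^2).
    e_local = k * C * (C / T) ** 2

    # Offloading: transmit L bits within T over bandwidth B, inverting
    # the Shannon capacity formula for the required transmit power.
    p_tx = (N0 * B / h) * (2 ** (L / (B * T)) - 1)
    e_offload = p_tx * T

    feasible = [(e, m) for e, m in
                [(e_local, 'local'), (e_offload, 'offload')]
                if e <= E_max]
    if not feasible:
        return None, None          # computing fails under this budget
    return min(feasible)           # cheapest feasible (energy, mode)
```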
Interference alignment (IA) is a joint-transmission technique that achieves the maximum degrees of freedom (DoF) of the interference channel, providing linear scaling of the capacity with the number of users at high signal-to-noise ratios (SNRs). Most prior work on IA is based on the impractical assumption that perfect and global channel-state information (CSI) is available at all transmitters. To implement IA, each receiver has to feed back CSI to all interferers, resulting in overwhelming feedback overhead. In particular, the sum feedback rate of each receiver scales quadratically with the number of users even when quantized CSI is fed back. To substantially suppress feedback overhead, this paper focuses on designing efficient arrangements of feedback links, called feedback topologies, under the IA constraint. For the multiple-input multiple-output (MIMO) K-user interference channel, we propose a feedback topology that supports sequential CSI exchange (feedback and feedforward) between transmitters and receivers so as to achieve IA progressively. This feedback topology is shown to reduce the network feedback overhead from a quadratic function of K to a linear one. To reduce the delay of the sequential CSI exchange, an alternative feedback topology is designed to support two-hop feedback via a control station, which also achieves linear feedback scaling with K. Next, given the proposed feedback topologies, a feedback-bit allocation algorithm is designed that allocates each receiver's feedback bits to its different feedback links so as to regulate the residual interference caused by finite-rate feedback. Simulation results demonstrate that the proposed bit allocation yields significant throughput gains, especially in strong-interference environments.
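The claimed overhead reduction can be made concrete with a back-of-the-envelope count: conventionally every receiver feeds back its cross-channel matrices to all transmitters, so the network total grows quadratically in K, whereas the sequential-exchange topology passes CSI along a chain and grows only linearly. The Python sketch below tabulates both counts; the per-message bit budget and the exact link counts are illustrative assumptions, not the paper's expressions.

```python
def feedback_overhead(K, b=10):
    """Compare network feedback overhead (in bits) for K users.

    Conventional: each receiver feeds back each of its K-1 quantized
    cross-channel matrices -> O(K^2) messages network-wide.
    Sequential exchange: CSI is forwarded once along the node chain,
    so the message count grows as O(K) (illustrative count).
    `b` is an assumed number of bits per quantized channel matrix.
    """
    conventional = K * (K - 1) * b       # quadratic in K
    sequential = (2 * K - 3) * b         # linear in K
    return conventional, sequential

for K in (3, 5, 10, 20):
    conv, seq = feedback_overhead(K)
    print(f"K={K:2d}: conventional={conv:5d} bits, sequential={seq:4d} bits")
```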
Conventional designs of mobile computation offloading fetch user-specific data to the cloud prior to computing, called offline prefetching. However, this approach can result in excessive fetching of large volumes of data and place heavy loads on radio-access networks. To solve this problem, the novel technique of live prefetching is proposed in this paper, which seamlessly integrates task-level computation prediction and prefetching within the cloud-computing process of a large program with numerous tasks. The technique avoids excessive fetching while retaining the benefit of leveraging prediction to reduce the program runtime and the mobile transmission energy. By modeling the tasks in an offloaded program as a stochastic sequence, stochastic optimization is applied to design fetching policies that minimize mobile energy consumption under a deadline constraint. The policies enable real-time control of the prefetched-data sizes of candidates for future tasks. For slow fading, the optimal policy is derived and shown to have a threshold-based structure, selecting candidate tasks for prefetching and controlling their prefetched-data sizes based on their likelihoods. The result is extended to design close-to-optimal prefetching policies for fast-fading channels. Compared with fetching without prediction, live prefetching is shown theoretically to always reduce mobile energy consumption.
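A minimal sketch of the slow-fading threshold structure: candidate tasks whose predicted likelihood exceeds a threshold are prefetched, with the prefetched-data size growing with likelihood, while sub-threshold tasks are left to on-demand fetching. The likelihood-to-size rule below is a hypothetical monotone mapping, not the exact policy derived in the paper.

```python
def prefetch_sizes(candidates, rho, alpha=1.0):
    """Threshold-based live prefetching for slow fading.

    candidates -- list of (task_id, likelihood, data_size_bits) for the
                  possible next tasks predicted by the program model
    rho        -- prefetching threshold on task likelihood
    alpha      -- illustrative scaling of prefetched size with likelihood

    Returns a dict mapping task_id -> bits to prefetch now; tasks below
    the threshold get 0 and are fetched on demand only if executed.
    """
    plan = {}
    for task_id, p, size in candidates:
        if p >= rho:
            # Hypothetical monotone rule: prefetch a fraction of the
            # task's data that grows with its likelihood.
            plan[task_id] = int(min(size, alpha * p * size))
        else:
            plan[task_id] = 0
    return plan

# Example: three candidate tasks with likelihoods 0.6, 0.3, and 0.1.
print(prefetch_sizes([("t1", 0.6, 8e6), ("t2", 0.3, 4e6), ("t3", 0.1, 2e6)],
                     rho=0.25))
```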