2017
DOI: 10.1109/twc.2017.2674665

Live Prefetching for Mobile Computation Offloading

Abstract: The conventional designs of mobile computation offloading fetch user-specific data to the cloud prior to computing, called offline prefetching. However, this approach can potentially result in excessive fetching of large volumes of data and cause heavy loads on radio-access networks. To solve this problem, the novel technique of live prefetching is proposed in this paper that seamlessly integrates the task-level computation prediction and prefetching within the cloud-computing process of a large program with n…

Cited by 68 publications (44 citation statements)
References 23 publications
“…The use of caching to dynamically store the program and/or task data at the MEC system has been recently recognized as a cost-effective method to reduce computation delay, energy consumption, and bandwidth cost [3], [10]. Here, we refer to the techniques to cache the input and/or output of computation tasks at the server/user side as computation content caching (such as in [3]–[7]). On one hand, content caching reduces the data exchange between the edge servers and MUs if the required input data can be found in the cache.…”
Section: A. Motivations and Summary of Contributions
confidence: 99%
“…For the case of offloading (t_{k,n} > 0), let g_k denote the channel power gain between mobile k and the BS, which is assumed to be constant during the computation offloading for each mobile. Based on a widely-used empirical model in [4], [6], [31], [32], the transmission power, denoted by p_{t,n}, can be modeled by a monomial function with respect to the achievable transmission rate (in bits/s) r_{k,n} = ℓ_{k,n}/t_{k,n}:…”
Section: Local
confidence: 99%
“…2 gives the normalized signal power per symbol versus the rate, where the monomial order m = 3 can fairly approximate the transmission power. Thus, the offloading energy consumption can be modeled by the following monomial function with respect to ℓ_{k,n} and t_{k,n}:…”
Section: Local
confidence: 99%
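The monomial energy model quoted above can be sketched numerically. In this minimal Python sketch, the coefficient `c` and the helper name `offloading_energy` are illustrative assumptions; the snippet itself fixes only the functional form (power proportional to rate raised to the order m = 3):

```python
def offloading_energy(l_bits: float, t_sec: float, c: float = 1e-27, m: int = 3) -> float:
    """Offloading energy under the assumed monomial power model:
    E = c * r**m * t, where r = l_bits / t_sec is the rate (bits/s).

    `c` is an illustrative coefficient, not taken from the paper.
    """
    r = l_bits / t_sec       # achievable transmission rate r_{k,n}
    power = c * r ** m       # monomial transmission power, order m
    return power * t_sec     # energy = power * time

# Energy scales as l**m / t**(m-1), so stretching the offloading
# time sharply reduces energy for the same amount of data.
e_fast = offloading_energy(1e6, 1.0)  # 1 Mbit offloaded in 1 s
e_slow = offloading_energy(1e6, 2.0)  # same data, twice the time
```

This convexity in the time allocation is what makes the cited offloading-scheduling problems tractable: spreading transmission over a longer slot always lowers energy, at the cost of delay.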
“…Allocating more computing resources in the network and supplying user devices with appropriate communications technology to improve the energy efficiency via offloaded computation pose new challenges. These include the need for reliable wireless connectivity and failure-free remote processing of information, which could be resolved with the advent of communication and computing cooperation (3C) techniques [14,15].…”
Section: Introduction
confidence: 99%