Reducing network latency in mobile applications is an effective way of improving the mobile user experience and has tangible economic benefits. This paper presents PALOMA, a novel client-centric technique for reducing network latency by prefetching HTTP requests in Android apps. Our work leverages string analysis and callback control-flow analysis to automatically instrument apps using PALOMA's rigorous formulation of scenarios that address "what" and "when" to prefetch. PALOMA has been shown to yield significant runtime savings (several hundred milliseconds per prefetchable HTTP request), both on a reusable evaluation benchmark we have developed and on real applications.
INTRODUCTION

In mobile computing, user-perceived latency is a critical concern as it directly impacts user experience and often has severe economic consequences. A recent report shows that a majority of mobile users would abandon a transaction, or even delete an app, if the transaction's response time exceeds three seconds [6]. Google estimates that an additional 500ms delay per transaction would result in up to a 20% loss of traffic, while Amazon estimates that every 100ms of delay causes a 1% annual sales loss [42]. A previous study showed that network transfer is often the performance bottleneck, with mobile apps spending 34-85% of their time fetching data from the Internet [32]. A compounding factor is that mobile devices rely on wireless networks, which can exhibit high latency, intermittent connectivity, and low bandwidth [21].

Reducing network latency is thus a highly effective way of improving the mobile user experience. In the context of mobile communication, we define latency as the response time of an HTTP request. In this paper, we propose a novel client-centric technique for minimizing network latency by prefetching HTTP requests in mobile apps. Prefetching bypasses the performance bottleneck (in this case, network speed) and masks latency by allowing the response to a request to be served immediately, from a local cache.
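To make the mechanism concrete, the sketch below shows one minimal way such a prefetching layer could be structured: a predicted request is issued ahead of time on a background thread, and the original request point is rewritten to consult an in-memory cache first. This is an illustration under our own assumptions, not PALOMA's actual instrumentation; the class and method names (PrefetchCache, prefetch, fetch) are hypothetical.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Illustrative prefetch cache: issue an HTTP request ahead of time,
 *  then serve the response from memory when the app actually needs it. */
public class PrefetchCache {
    private final Map<String, Future<byte[]>> cache = new ConcurrentHashMap<>();
    private final ExecutorService executor = Executors.newCachedThreadPool();

    /** Called when analysis predicts the app will soon request this URL. */
    public void prefetch(String url) {
        cache.computeIfAbsent(url, u -> executor.submit(() -> download(u)));
    }

    /** Called at the original request point; returns immediately on a hit. */
    public byte[] fetch(String url) throws Exception {
        Future<byte[]> pending = cache.get(url);
        if (pending != null) {
            return pending.get();  // cache hit: no network round trip at use time
        }
        return download(url);      // cache miss: fall back to a normal fetch
    }

    private static byte[] download(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = conn.getInputStream();
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            conn.disconnect();
        }
    }
}
```

Keying the cache on a Future rather than on the raw bytes means a fetch that arrives while the prefetch is still in flight simply blocks for the remaining transfer time instead of issuing a duplicate request.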