Excerpted from "Rethinking Energy-Performance Trade-Off in Mobile Web Page Loading," from Proceedings of the 21st Annual International Conference on Mobile Computing and Networking with permission. http://dl.acm.org/citation.cfm?id=2790103 © ACM 2015. Web browsers are one of the core applications on smartphones and other mobile devices, such as tablets. However, web browsing, particularly web page loading, consumes significant energy because mobile browsers are largely optimized for performance, which imposes a heavy burden on power-constrained mobile devices. With the advent of modern web capabilities, websites have become even more complex and energy demanding. Meanwhile, slow progress in battery technology constrains the battery budget of mobile devices. As users become more aware of the energy consumption of apps, it is desirable to improve the energy efficiency of web browsing, particularly web page loading.
The goal of this work is to provide an abstraction of ideal sound environments to an emerging class of Mobile Multispeaker Audio (MMA) applications. It is typically challenging for MMA applications to implement advanced sound features (e.g., surround sound) accurately in mobile environments, especially due to unknown, irregular loudspeaker configurations. To create the illusion that MMA applications run over specific loudspeaker configurations (i.e., speaker type and layout), this work proposes AMAC, a new Adaptive Mobile Audio Coordination system that senses the acoustic characteristics of mobile environments and controls individual loudspeakers adaptively and accurately. A prototype of AMAC implemented on commodity smartphones achieves coordination accuracy in sound arrival time within several tens of microseconds and substantially reduces the variance in sound level.
The advantages of sound, such as its utility for measurement and accessibility, open new opportunities for mobile applications to offer a broad range of interesting, valuable functionalities that support a richer user experience. However, despite the growing interest in mobile sound applications, little work has focused on managing the audio device effectively. More specifically, the limited real-time capability of mobile platforms for audio resources makes it challenging to satisfy the tight timing requirements of mobile sound applications, e.g., the high sensing rate of acoustic sensing applications. To address this problem, this work presents the SounDroid framework, an audio device management framework for real-time audio requests from mobile sound applications. The design of SounDroid is based on a requirement analysis of audio requests as well as an understanding of the audio playback procedure on Android, including audio request scheduling and dispatching. SounDroid incorporates real-time audio request scheduling algorithms, called EDF-V and AFDS, together with dispatching optimization techniques into mobile platforms, thereby improving the quality of service of mobile sound applications. Our experimental results with a prototype implementation of SounDroid demonstrate that it enhances scheduling performance for audio requests compared to traditional mechanisms (by up to 40%) while providing deterministic dispatching latency.
Samsung Pay, one of the most representative mobile payment services, allows mobile users to make payment transactions almost anywhere using only their smartphone. This is enabled by MST (Magnetic Secure Transmission), which supports communication between smartphones and payment terminals for magnetic cards by transferring payment tokens via magnetic waves. Several attack methods have targeted this new technology by eavesdropping on magnetic fields to intercept the tokens, but they require dedicated hardware. This paper raises new security concerns for mobile payment users in a different, yet more effective, way by introducing MagSnoop, a novel framework that infers payment tokens by listening to the MST sounds generated during the activation of MST payment transactions. More specifically, we first explore the principle underlying the generation of MST sounds and the fundamental characteristics of these sounds. We then use these observations to infer payment tokens with a high degree of accuracy, robustness, applicability, and data efficiency. Our experiments with a prototype of MagSnoop demonstrate that it achieves high accuracy in token inference (more than 77.8%). In addition, MagSnoop maintains a reasonable level of accuracy across payment environments (e.g., 69.2% with a noise level of 50 dBA) and even in the real world (an inference success rate of 68.0% with 15 real-world users).
CCS Concepts: • Security and privacy → Mobile and wireless security; Side-channel analysis and countermeasures; • Human-centered computing → Smartphones.
We propose a novel tapstroke inference attack method, called TapSnoop, that precisely recovers what a user types on touchscreen devices. Inferring tapstrokes is challenging owing to 1) low tapstroke intensity and 2) dynamically changing noise. We address these challenges by revealing the unique characteristics of tapstrokes in the audio recordings that TapSnoop exploits as a side channel. In particular, we develop tapstroke detection and localization algorithms that collectively leverage audio features obtained from multiple microphones, designed to reflect the core properties of tapstrokes. Furthermore, we improve robustness against environmental changes by developing environment-adaptive classification and noise subtraction algorithms. Extensive experiments with ten real-world users on both number and QWERTY keyboards show that TapSnoop achieves inference accuracies of 85.4% and 75.6%, respectively (96.2% and 90.8% in best-case scenarios), in stable environments. TapSnoop also achieves reasonable accuracy under varying noise; for example, it shows inference accuracies of 84.8% and 72.7% on a numeric keyboard when the noise level varies from 37.9 to 51.2 dBA and from 46.7 to 60.0 dBA, respectively.
Index Terms: Acoustic signal processing, acoustic sensors, mobile computing, privacy, side-channel attack, tapstroke inference.