The growing number of sensors on smart mobile devices has led to the rapid development of mobile applications using location-based or context-aware services. Outdoor localization techniques have typically relied on GPS or on cellular infrastructure support. While GPS gives high positioning accuracy, it can quickly deplete the device's battery; base-station-based localization, on the other hand, has low accuracy. In search of alternative techniques for outdoor localization, several approaches have explored data gathered from other available sensors, such as the accelerometer, microphone, and compass, and even daily usage patterns, to identify unique signatures that can locate a device. Signatures, or fingerprints, of an area are hidden cues present in a user's environment. However, under different operating scenarios, fingerprint-based localization techniques vary in accuracy, detection latency, and battery usage. The main contribution of this survey is a classification of existing fingerprint-based localization approaches that intelligently sense and match different cues from the environment for location identification. We describe how each fingerprinting technique works, then review the merits and demerits of the systems built on these techniques. We conclude by identifying several improvements and application domains for fingerprint-based localization.
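The core idea common to the surveyed approaches can be illustrated with a minimal sketch: an observed sensor signature is matched against a database of previously recorded fingerprints, and the closest match gives the location. The database contents, feature choices, and names below are illustrative assumptions, not any specific system from the survey.

```python
import math

# Hypothetical fingerprint database: location label -> recorded ambient
# signature (e.g., mean RSSI of two transmitters and an ambient sound
# level). All values are made up for illustration.
FINGERPRINT_DB = {
    "plaza":   [-60.0, -75.0, 48.0],
    "station": [-52.0, -80.0, 66.0],
    "park":    [-71.0, -68.0, 40.0],
}

def locate(observed, db=FINGERPRINT_DB):
    """Return the location whose stored fingerprint is closest
    (Euclidean distance) to the observed signature."""
    return min(db, key=lambda loc: math.dist(db[loc], observed))

print(locate([-53.0, -79.0, 64.0]))  # prints "station"
```

Real systems differ mainly in which cues form the fingerprint and how the matching trades off accuracy, latency, and battery usage, which is exactly the axis along which the survey classifies them.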
There are different modes of interaction with a software keyboard on a smartphone, such as typing and swyping. Patterns of such touch interactions may reflect a user's emotions. Since users may switch between touch modalities while using a keyboard, automatic emotion detection from touch patterns must consider both modalities in combination. In this paper, we focus on identifying features of touch interactions with a smartphone keyboard that lead to a personalized model for inferring user emotion. Since distinguishing typing from swyping is important for recording the correct features, we designed a technique to identify the modality reliably. Ground-truth labels for user emotion are collected directly from the user through periodic self-reports. We jointly model typing and swyping features and correlate them with the self-reports to build a personalized machine learning model that detects four emotion states (happy, sad, stressed, relaxed). We combine these design choices into an Android application, TouchSense, and evaluate it in a 3-week in-the-wild study involving 22 participants. Our key evaluation results and post-study participant assessments demonstrate that these emotion states can be predicted with an average accuracy (AUCROC) of 73% (std. dev. 6%, maximum 87%) from these two touch interactions alone.
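The personalization step described above — correlating touch features with self-reported labels on a per-user basis — can be sketched with a toy classifier. This is a minimal illustration under stated assumptions (a nearest-centroid model and invented features such as typing speed, backspace rate, and swipe length), not the paper's actual feature set or learning algorithm.

```python
import math
from collections import defaultdict

EMOTIONS = ("happy", "sad", "stressed", "relaxed")

def train(samples):
    """Build a per-user model from (feature_vector, emotion_label)
    pairs gathered via self-reports: one centroid per emotion."""
    sums, counts = {}, defaultdict(int)
    for features, label in samples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {e: [s / counts[e] for s in sums[e]] for e in sums}

def predict(model, features):
    """Assign the emotion whose centroid is nearest to the features."""
    return min(model, key=lambda e: math.dist(model[e], features))

# Toy usage with invented features: [keys/sec, backspace rate, swipe length]
model = train([
    ([4.0, 0.05, 120.0], "happy"),
    ([2.0, 0.20, 60.0], "stressed"),
])
print(predict(model, [3.8, 0.06, 110.0]))  # prints "happy"
```

Because the model is trained only on one user's own self-reports, it adapts to that user's idiosyncratic touch behavior, which is the motivation for personalization in the paper.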