Today, mobile devices such as smartphones and tablets have become an indispensable part of people's lives, raising many new questions, e.g., regarding interaction methods, but also security. In this paper, we conduct a large-scale, long-term analysis of mobile device usage characteristics such as session length, interaction frequency, and daily usage in locked and unlocked state with respect to location context and diurnal patterns. Based on detailed logs from 29,279 mobile phones and tablets representing a total of 5,811 years of usage time, we identify and analyze 52.2 million usage sessions, with some participants providing data for more than four years. Our results show that context has a highly significant effect on both the frequency and extent of mobile device usage, with mobile phones being used twice as much at home as in the office. Interestingly, devices are unlocked for only 46% of interactions. With an average of 60 interactions per day, smartphones are used almost three times as often as tablets (23 interactions per day), while usage sessions on tablets are three times longer, so both device classes are used for roughly equal amounts of time throughout the day. We conclude that usage session characteristics differ considerably between tablets and smartphones. These results inform future approaches to mobile interaction as well as security.
We analyze locked and unlocked mobile device usage of 1,960 Android smartphones. Based on approximately 10 TB of mobile device data logs collected by the Device Analyzer project, we derive 6.9 million usage sessions using a screen-power-state-machine-based approach. From these sessions we examine the number of interactions per day, the average interaction duration, and the total daily device usage time. Findings indicate that on average users interact with their devices for 117 minutes a day, spread over 57 interactions, while unlocking their device only 43% of the time (e.g., merely checking for notifications).
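The session-derivation idea can be sketched as a small state machine over ordered screen events. This is a minimal illustration, not the Device Analyzer log schema: event names, timestamps, and the session boundaries are assumptions.

```python
# Sketch of a screen-power state machine for deriving usage sessions
# from ordered device log events. Event names ('screen_on', 'unlock',
# 'screen_off') are illustrative, not the actual log format.

def derive_sessions(events):
    """events: ordered (timestamp_s, name) tuples. Returns sessions as
    dicts with start, end, and whether the device was unlocked."""
    sessions = []
    current = None
    for ts, name in events:
        if name == "screen_on" and current is None:
            current = {"start": ts, "unlocked": False}  # session begins
        elif name == "unlock" and current is not None:
            current["unlocked"] = True                  # locked -> unlocked use
        elif name == "screen_off" and current is not None:
            current["end"] = ts                         # session ends
            sessions.append(current)
            current = None
    return sessions

log = [(0, "screen_on"), (30, "screen_off"),            # locked glance
       (100, "screen_on"), (105, "unlock"), (400, "screen_off")]
print(derive_sessions(log))
```

On this toy log, the first session stays locked (e.g., checking notifications), while the second is unlocked and lasts 300 seconds.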
As users start carrying multiple mobile devices, we propose a novel, token-based mobile device unlocking approach. Mobile devices are conjointly shaken to transfer the authentication state from an unlocked token device to another device in order to unlock it. A common use case features a wrist watch as the token device, which remains unlocked as long as it is strapped to the user's wrist, and a locked mobile phone, which is unlocked when both devices are shaken conjointly. Shaking can be done single-handedly, requires little user attention (users do not have to look at the device to unlock it), and does not place additional cognitive load on users. If attackers gain control over the locked phone, forging the shaking is difficult, which impedes malicious unlocks. We evaluate our approach using acceleration records from our ShakeUnlock database of 29 participants and discuss the influence of its constituent parts on system performance. We further present a performance study using an Android implementation and live data, which shows the true negative rate of observational attacks to be in the range of 0.8 when an attacker manages to gain control over the locked device and shake it in parallel to the device owner shaking the token device.
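The core decision (were the two devices shaken together?) can be illustrated by correlating the two acceleration magnitude series. This is a simplified sketch under assumed parameters; the actual ShakeUnlock pipeline involves more steps (e.g., signal alignment and tuned thresholds).

```python
# Illustrative sketch: decide whether two devices were shaken conjointly
# by correlating their acceleration magnitude series. The threshold
# value is an assumption, not the system's tuned parameter.
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def conjoint_shake(acc_token, acc_phone, threshold=0.8):
    """True if the two series are similar enough to grant the unlock."""
    return pearson(acc_token, acc_phone) >= threshold

shake = [math.sin(0.7 * t) for t in range(100)]           # shared motion
noise = [((37 * t) % 11) / 11 - 0.5 for t in range(100)]  # unrelated motion
print(conjoint_shake(shake, [s + 0.05 for s in shake]))   # same shake: unlock
print(conjoint_shake(shake, noise))                       # attacker motion: reject
```

An attacker imitating the shake produces a signal whose fine structure differs from the owner's, which is why forging the conjoint motion is hard.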
Sports and workout activities have become important parts of modern life. Nowadays, many people track characteristics of their sport activities with their mobile devices, which feature inertial measurement unit (IMU) sensors. In this paper we present a methodology to detect and recognize workouts, as well as to count the repetitions performed in a recognized type of workout, from a single 3D accelerometer worn at the chest. We consider four different types of workout (pushups, situps, squats, and jumping jacks). Our technical approach to workout type recognition and repetition counting is based on machine learning with a convolutional neural network. Our evaluation utilizes data of 10 subjects, who wore a Movesense sensor on their chest during their workout. We find that workouts are recognized correctly on average 89.9% of the time, and that workout repetition counting yields an average detection accuracy of 97.9% over all types of workout.
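Repetition counting exploits the periodicity of workout movements in the acceleration signal. The paper's approach uses a convolutional neural network; the following toy sketch conveys the underlying idea with simple peak counting on synthetic data (threshold and spacing are assumptions).

```python
# Minimal repetition-counting sketch on a 1D acceleration signal.
# Counts peaks above a threshold with a minimum spacing between peaks;
# the paper itself uses a CNN rather than this heuristic.
import math

def count_repetitions(signal, threshold=0.5, min_gap=5):
    reps, last_peak = 0, -min_gap
    for i in range(1, len(signal) - 1):
        is_peak = (signal[i] > threshold
                   and signal[i] >= signal[i - 1]
                   and signal[i] > signal[i + 1])
        if is_peak and i - last_peak >= min_gap:
            reps += 1
            last_peak = i
    return reps

# Synthetic signal: 9 movement cycles, each 9 samples long.
signal = [math.sin(2 * math.pi * t / 9) for t in range(81)]
print(count_repetitions(signal))  # one counted peak per cycle
```

The minimum-gap constraint suppresses double counts from small jitters near a peak, a common concern with raw IMU data.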
Biometrics have become important for mobile authentication, e.g., to unlock devices before using them. One way to protect biometric information stored on mobile devices from disclosure is to use embedded smart cards (SCs) with biometric match-on-card (MOC) approaches. However, the computational restrictions of SCs also limit biometric matching procedures. We present a mobile MOC approach that uses offline training to obtain authentication models with a simplistic internal representation in the final trained state, for which we adapt features and model representation to enable their use on SCs. The pre-trained model can be shipped with SCs on mobile devices without requiring retraining to enroll users. We apply our approach to acceleration-based mobile gait authentication as well as face authentication, and compare the authentication accuracy and computation time of 16 and 32 bit Java Card SCs. Using 16 instead of 32 bit SCs has little impact on authentication performance and is faster due to less data transfer and fewer computations on the SC. Results indicate 11.4% EER for gait and 2.4-5.4% EER for face authentication, with transmission and computation durations on SCs in the range of 2 s and 1 s, respectively. To the best of our knowledge, this work represents the first practical approach towards acceleration-based gait MOC authentication.
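The "simplistic internal representation" idea can be illustrated by reducing an offline-trained linear model to small integers, so that the match needs only integer arithmetic of the kind a 16-bit card applet supports. All weight values, the scaling factor, and the threshold below are illustrative assumptions, not the paper's actual model.

```python
# Sketch: quantize an offline-trained linear model to small integers so
# the biometric match can run with integer-only arithmetic on a smart
# card. Values and threshold are made up for illustration.

def quantize(weights, scale=100):
    """Map float weights to small integers for on-card storage."""
    return [int(round(w * scale)) for w in weights]

def match_on_card(int_weights, int_features, threshold):
    """Integer-only score, as a card applet might compute it."""
    score = sum(w * f for w, f in zip(int_weights, int_features))
    return score >= threshold  # accept (genuine) or reject (impostor)

float_weights = [0.31, -0.12, 0.54]   # trained offline, off-card
w = quantize(float_weights)           # -> [31, -12, 54]
genuine = [12, 3, 20]                 # quantized probe feature vector
impostor = [2, 18, 1]
print(match_on_card(w, genuine, 500))
print(match_on_card(w, impostor, 500))
```

Since the model ships pre-trained, enrollment only stores the user's reference features on the card; no on-card training is needed.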
Biometric authentication, i.e., using biometric features for authentication, has gained popularity in recent years as further modalities, such as fingerprint, iris, face, voice, and gait, are exploited. We explore the effectiveness of three simple electroencephalography (EEG) based biometric authentication tasks: resting, thinking about a picture, and moving a single finger. We present the data processing steps we use for authentication, including extracting features from the frequency power spectrum and MFCCs, and training a multilayer perceptron classifier for authentication. For evaluation purposes, we record an EEG dataset of 27 test subjects. We use three setups, baseline, task-agnostic, and task-specific, to investigate whether person-specific features can be detected across different tasks for authentication. We further evaluate whether different tasks can be distinguished. Our results suggest that tasks are distinguishable, and that our authentication approach works both with features from a specific, fixed task and with features across different tasks.
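Frequency-power-spectrum features of the kind fed to such a classifier can be sketched as per-band spectral power. The band limits below are the conventional EEG theta/alpha/beta ranges and the sampling rate is assumed, not taken from the paper.

```python
# Illustrative band-power feature extraction from one EEG channel:
# a naive DFT power spectrum summed per frequency band. Band limits
# follow common EEG conventions (theta/alpha/beta), not the paper.
import cmath, math

def band_powers(x, fs, bands=((4, 8), (8, 13), (13, 30))):
    """Return summed spectral power per band for signal x at rate fs."""
    n = len(x)
    spectrum = [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) ** 2 / n
                for k in range(n // 2)]
    out = []
    for lo, hi in bands:
        k_lo, k_hi = int(lo * n / fs), int(hi * n / fs)
        out.append(sum(spectrum[k_lo:k_hi]))
    return out

fs = 128
t = [i / fs for i in range(fs)]                             # one second
alpha_wave = [math.sin(2 * math.pi * 10 * ti) for ti in t]  # 10 Hz rhythm
theta, alpha, beta = band_powers(alpha_wave, fs)
print(alpha > theta and alpha > beta)                       # energy in alpha band
```

Concatenating such band powers (plus MFCC-style coefficients) over channels yields the feature vector a multilayer perceptron can be trained on.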
Gaze gestures hold potential for user input on mobile devices, especially smart glasses, as they are always available and hands-free. So far, gaze gesture recognition approaches have utilized open-eye movements only and disregarded closed-eye movements. This paper presents a first investigation of the feasibility of detecting and recognizing closed-eye gaze gestures from close-up optical sources, e.g., eye-facing cameras embedded in smart glasses. We propose four different closed-eye gaze gesture protocols, which extend the alphabet of existing open-eye gaze gesture approaches. We further propose a methodology for detecting and extracting the corresponding closed-eye movements with optical flow, time series processing, and machine learning. In an evaluation of the four protocols, we find that closed-eye gaze gestures are detected 82.8%-91.6% of the time, and that extracted gestures are recognized correctly with an accuracy of 92.9%-99.2%.
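Once optical flow condenses each camera frame into a movement magnitude, gesture detection reduces to finding sustained above-threshold runs in that time series. The threshold and minimum length below are illustrative assumptions, not the paper's tuned parameters.

```python
# Sketch: detect closed-eye movement segments from a per-frame
# optical-flow magnitude series (as dense optical flow on eye-facing
# camera frames would produce). Parameters are assumed values.

def detect_movement_segments(flow_mag, threshold=1.0, min_len=3):
    """Return (start, end) frame indices of above-threshold runs that
    last at least min_len frames."""
    segments, start = [], None
    for i, m in enumerate(flow_mag + [0.0]):        # sentinel closes last run
        if m > threshold and start is None:
            start = i
        elif m <= threshold and start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None
    return segments

# Magnitudes: rest, a sustained eye movement, rest, a brief twitch.
mags = [0.1, 0.2, 2.5, 3.1, 2.8, 2.2, 0.3, 0.1, 1.5, 0.2]
print(detect_movement_segments(mags))  # the brief twitch is filtered out
```

The extracted segments can then be classified into the gesture alphabet, e.g., with a machine-learning model over the segment's flow directions.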