Accelerometers and gyroscopes embedded in mobile devices have shown great potential for non-obtrusive gait biometrics by directly capturing a user's characteristic locomotion. Despite the success of gait analysis under controlled experimental settings using these sensors, their performance in realistic scenarios is unsatisfactory because the data depend on sensor placement, and in practice the placement of mobile devices is unconstrained. In this paper, we propose a novel gait representation for accelerometer and gyroscope data that is both sensor-orientation-invariant and highly discriminative, enabling high-performance gait biometrics for real-world applications. We also adopt the i-vector paradigm, a state-of-the-art machine learning technique widely used for speaker recognition, to extract gait identities using the proposed gait representation. Performance studies on both the naturalistic McGill University gait dataset and the Osaka University gait dataset, which contains 744 subjects, show the clear superiority of this novel gait biometrics approach over existing methods.
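The abstract above does not specify its representation, but the simplest illustration of orientation invariance is the Euclidean magnitude of a 3-axis accelerometer sample, which is unchanged under any rotation of the sensor frame. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
import math

def accel_magnitude(sample):
    """Rotation-invariant magnitude of a 3-axis accelerometer sample.

    sample: (ax, ay, az) in any sensor orientation; the Euclidean norm
    is identical however the device is rotated, e.g. in a pocket.
    """
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

# The same physical acceleration read in two different device orientations
upright = (0.0, 0.0, 9.81)    # gravity along the z-axis
sideways = (9.81, 0.0, 0.0)   # device rotated 90 degrees
assert abs(accel_magnitude(upright) - accel_magnitude(sideways)) < 1e-9
```

A magnitude series discards directional information, so a discriminative representation such as the one proposed would need to recover more structure; this sketch only shows why orientation-invariant features sidestep the placement problem.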
In this paper we investigate the problem of user authentication using keystroke biometrics. We propose a new distance metric that is effective in dealing with the challenges intrinsic to keystroke dynamics data, i.e., scale variations, feature interactions, redundancies, and outliers. Our keystroke biometrics algorithms based on this new distance metric are evaluated on the CMU keystroke dynamics benchmark dataset and are shown to be superior to algorithms using traditional distance metrics.

Introduction

With the ever-increasing demand for more secure access control in many of today's security applications, traditional methods such as PINs, tokens, or passwords fail to keep up with the challenges presented because they can be lost or stolen, compromising system security. Keystroke biometrics [26] provides a natural choice for secure "password-free" computer access. Keystroke dynamics refers to the habitual patterns or rhythms an individual exhibits while typing on a keyboard input device. These rhythms and patterns of tapping are idiosyncratic [5], in the same way as handwriting or signatures, due to their similar governing neurophysiological mechanisms. As early as the 19th century, telegraph operators could recognize each other based on an operator's specific tapping style [18]. This suggests that keystroke dynamics contain sufficient information to serve as a potential biometric identifier for a specific keyboard user.

Compared to other biometrics, keystroke biometrics has additional desirable properties owing to its user-friendliness and non-intrusiveness. Keystroke dynamics data can be collected without a user's cooperation or even awareness. Continuous authentication is possible using keystroke dynamics as a mere consequence of people's use of computers. Unlike many other biometrics, the temporal information of keystrokes can be collected using only software, with no additional hardware.
In summary, keystroke dynamics biometrics enables cost-effective, user-friendly, and continuous user authentication with potential for high accuracy.

Although keystroke dynamics is governed by a person's neurophysiological pathway and is therefore highly individualistic, it can also be influenced by his or her psychological state. As a "behavioral" biometric [35], keystroke dynamics exhibits instabilities due to transient factors such as emotions, stress, and drowsiness [6]. It also depends on external factors, such as the input keyboard device used, possibly due to different layouts of the keys. The keying times can be noisy, with outliers. Because keystroke biometrics exploits the habitual rhythm in typing, it has been observed that keystrokes of frequently typed words or strings show more consistency and are better discerners [22, 38]. Keystroke biometrics can use "static text", where the keystroke dynamics of a specific pre-enrolled text, such as a password, is analyzed at a certain time, e.g., during the log-on process. For more secure applications, "free text" should be used to continuously authenticate a user...
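The paper's new distance metric is not given in this excerpt. As a hedged illustration of the kind of metric it improves upon, the scaled Manhattan distance (a standard baseline on the CMU benchmark) divides each timing feature's deviation by its mean absolute deviation, so features measured on different scales (hold times vs. inter-key latencies) contribute comparably:

```python
def scaled_manhattan(probe, template_mean, template_mad):
    """Scaled Manhattan distance between a probe typing sample and a
    user's enrollment template.

    probe          : list of timing features (hold times, latencies) in seconds
    template_mean  : per-feature mean over the user's enrollment samples
    template_mad   : per-feature mean absolute deviation over those samples

    Dividing by the MAD damps scale differences between features; a small
    epsilon guards against division by zero for perfectly stable features.
    """
    return sum(abs(p - m) / (a if a > 0 else 1e-9)
               for p, m, a in zip(probe, template_mean, template_mad))
```

A lower score means the probe is closer to the template; a threshold on the score decides accept/reject. This is a baseline sketch, not the metric proposed in the paper.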
These results demonstrate the viability of our system as an alternative modality of communication for a multitude of applications including: persons with speech impairments following a laryngectomy; military personnel requiring hands-free covert communication; or the consumer in need of privacy while speaking on a mobile phone in public.
Each year thousands of individuals require surgical removal of their larynx (voice box) due to trauma or disease, and thereby require an alternative voice source or assistive device to communicate verbally. Although natural voice is lost after laryngectomy, most muscles controlling speech articulation remain intact. Surface electromyographic (sEMG) activity of speech musculature can be recorded from the neck and face and used for automatic speech recognition to provide speech-to-text or synthesized speech as an alternative means of communication. This is true even when speech is mouthed or spoken in a silent (subvocal) manner, making it an appropriate communication platform after laryngectomy. In this study, 8 individuals at least 6 months after total laryngectomy were recorded using 8 sEMG sensors on their face (4) and neck (4) while reading phrases constructed from a 2,500-word vocabulary. A unique set of phrases was used to train phoneme-based recognition models for each of the 39 commonly used phonemes in English, and the remaining phrases were used to test word recognition of the models based on phoneme identification from running speech. Word error rates were on average 10.3% for the full 8-sensor set (averaging 9.5% for the top 4 participants), and 13.6% when reducing the sensor set to 4 locations per individual (n=7). This study provides a compelling proof-of-concept for sEMG-based alaryngeal speech recognition, with strong potential to further improve recognition performance.
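The word error rates quoted above are conventionally computed as the word-level edit distance between the reference transcript and the recognizer's hypothesis, divided by the reference length. A self-contained sketch of that standard computation (not code from the study):

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) / #reference words,
    computed via word-level Levenshtein edit distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, one substituted word in a four-word reference yields a WER of 0.25; the study's reported 10.3% average corresponds to roughly one word error per ten reference words.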
Recent breakthroughs in deep learning and artificial intelligence technologies have enabled numerous mobile applications. While traditional computation paradigms rely on mobile sensing and cloud computing, deep learning implemented on mobile devices provides several advantages: low communication bandwidth, small cloud-computing resource cost, quick response time, and improved data privacy. Research and development of deep learning on mobile and embedded devices has recently attracted much attention. This paper provides a timely review of this fast-paced field to give researchers, engineers, practitioners, and graduate students a quick grasp of the recent advancements of deep learning on mobile devices. We discuss hardware architectures for mobile deep learning, including Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), and recent mobile Graphics Processing Units (GPUs). We present Size, Weight, Area and Power (SWAP) considerations and their relation to algorithm optimizations, such as quantization, pruning, compression, and approximations that simplify computation while retaining performance accuracy. We cover existing systems and give a state-of-the-industry review of the TensorFlow, MXNet, Mobile AI Compute Engine (MACE), and Paddle-mobile deep learning platforms. We discuss resources for mobile deep learning practitioners, including tools, libraries, models, and performance benchmarks. We present applications of various mobile sensing modalities to industries ranging from robotics, healthcare, multimedia, and biometrics to autonomous driving and defense. We address the key deep learning challenges to overcome, including low-quality data and small training/adaptation datasets. In addition, the review provides numerous citations and links to existing code bases implementing various technologies. These resources lower the barrier to entry into the field of mobile deep learning.
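Among the optimizations listed above, quantization is the most mechanical to illustrate: weights stored as 32-bit floats are mapped to 8-bit integers plus a scale factor, cutting model size roughly 4x. A minimal sketch of symmetric int8 quantization (illustrative only; production frameworks use calibrated, often per-channel schemes):

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to the int8 range.

    Returns (quantized_values, scale); each float w is approximated by
    round(w / scale) clamped to [-127, 127], so w ~= q * scale.
    """
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0.0:
        return [0] * len(weights), 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]
```

The reconstruction error per weight is bounded by about half the scale, which is why quantization can "simplify computation while retaining performance accuracy" for suitably trained networks.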