Voice-based authentication is widely deployed on smart devices to verify the legitimacy of users, but it is vulnerable to replay attacks. In this paper, we propose to leverage the distinctive chest motions during speaking to establish a secure multi-factor authentication system, named ChestLive. Compared with other biometric-based authentication systems, ChestLive does not require users to remember any complicated information (e.g., hand gestures, doodles), and its working distance is much longer (30 cm). We use acoustic sensing to monitor chest motions with the built-in speaker and microphone of a smartphone. To obtain fine-grained chest motion signals during speaking for reliable user authentication, we derive the Channel Energy (CE) of the acoustic signals to capture chest movement, and then remove static and non-static interference from the aggregated CE signals. Representative features are extracted from the correlation between the voice signal and the corresponding chest motion signal. Unlike learning-based image or speech recognition models with millions of available training samples, our system must cope with a limited number of samples from legitimate users during enrollment. To address this problem, we resort to meta-learning, which initializes a general model with good generalization properties that can be quickly fine-tuned to identify a new user. We implement ChestLive as an application and evaluate its performance in the wild with 61 volunteers using their smartphones. Experimental results show that ChestLive achieves an authentication accuracy of 98.31% with a false accept rate below 2% against replay and impersonation attacks. We also validate that ChestLive is robust to various factors, including training set size, distance, angle, posture, phone model, and environmental noise.
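To make the sensing step concrete, below is a minimal sketch of the channel-energy idea: bandpass the microphone signal around an inaudible carrier, sum per-frame energy to obtain the CE signal, then strip static interference (the DC component from static reflectors) and non-static interference outside the chest-motion band. The 20 kHz carrier, 10 ms frame, and 0.5-8 Hz motion band are illustrative assumptions, not ChestLive's published parameters.

```python
# Sketch of channel-energy (CE) extraction and interference removal.
# Carrier frequency, frame length, and motion band are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 48_000     # microphone sampling rate (Hz)
FC = 20_000     # assumed inaudible carrier (Hz)
FRAME = 480     # 10 ms frames at 48 kHz

def channel_energy(mic, fs=FS, fc=FC, frame=FRAME):
    """Per-frame energy of the band around the carrier (the CE signal)."""
    sos = butter(4, [fc - 500, fc + 500], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, mic)
    n = len(band) // frame
    frames = band[: n * frame].reshape(n, frame)
    return np.sum(frames ** 2, axis=1)

def isolate_chest_motion(ce, frame_rate=FS / FRAME):
    """Remove static interference (mean/DC from static reflectors) and
    non-static interference outside an assumed 0.5-8 Hz chest-motion band."""
    ce = ce - np.mean(ce)                                    # static component
    sos = butter(4, [0.5, 8.0], btype="bandpass", fs=frame_rate, output="sos")
    return sosfiltfilt(sos, ce)                              # out-of-band motion removed
```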
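The meta-learning step can be sketched as a MAML-style loop, one possible instantiation of the initialization-then-fine-tune scheme the abstract describes: meta-train an initialization across many users' tasks, then adapt it to a new user from a few enrollment samples. The PyTorch sketch below uses a hypothetical tiny classifier over 64-dimensional features; the architecture, shapes, and learning rates are all illustrative assumptions.

```python
# MAML-style meta-learning sketch (PyTorch); not ChestLive's actual model.
import torch
import torch.nn.functional as F

def forward(x, params):
    """Tiny two-layer classifier applied with an explicit parameter list."""
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1 + b1) @ w2 + b2

def init_params(in_dim=64, hid=32, n_classes=2):
    shapes = [(in_dim, hid), (hid,), (hid, n_classes), (n_classes,)]
    return [(0.1 * torch.randn(s)).requires_grad_() for s in shapes]

def inner_adapt(params, x_s, y_s, lr_inner=0.1, steps=3):
    """Few-shot adaptation: a few gradient steps on a user's support set.
    create_graph=True retains second-order gradients so the meta-update
    optimizes for fast adaptation, not just raw task performance."""
    adapted = list(params)
    for _ in range(steps):
        loss = F.cross_entropy(forward(x_s, adapted), y_s)
        grads = torch.autograd.grad(loss, adapted, create_graph=True)
        adapted = [p - lr_inner * g for p, g in zip(adapted, grads)]
    return adapted

def meta_train(tasks, epochs=100, lr_outer=1e-3):
    """Each task = (support_x, support_y, query_x, query_y) for one user."""
    params = init_params()
    for _ in range(epochs):
        meta_loss = 0.0
        for x_s, y_s, x_q, y_q in tasks:
            adapted = inner_adapt(params, x_s, y_s)
            meta_loss = meta_loss + F.cross_entropy(forward(x_q, adapted), y_q)
        grads = torch.autograd.grad(meta_loss, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= lr_outer * g
    return params
```

At enrollment time, `inner_adapt(meta_params, x_enroll, y_enroll)` would yield a user-specific classifier from only a handful of samples, which is the property the abstract attributes to its meta-learned initialization.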
Diaper wetness monitoring is essential in various situations (e.g., for babies and patients) to guarantee hygiene and avoid embarrassment. Existing diaper wetness monitoring methods include indicator lines, special sensors, and RFID tags, which require modifications to every diaper and cannot be easily checked under visual occlusion (e.g., trousers). In this paper, we introduce Wet-Ra, a contactless, ubiquitous, and user-friendly diaper wetness monitoring system based on RF signals. To extract informative features for wetness detection from RF signals, we construct Continuous-Radio-Snapshots and build corresponding signal representations that capture the distinct patterns of diapers at different wetness levels. We refine the signal representation by eliminating multi-path interference from the environment and mitigating the smearing effect with the wavelet multisynchrosqueezing transform. To expand the usability of Wet-Ra, we build a transferable model that yields robust detection results in diverse environments and for new users. We conduct extensive experiments to evaluate Wet-Ra with 47 volunteers in 7 different rooms using 3 off-the-shelf diaper brands. Experimental results confirm that Wet-Ra accurately identifies diaper wetness in real environments.
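The smearing-mitigation step can be illustrated with synchrosqueezing. The paper names the wavelet multisynchrosqueezing transform; the sketch below substitutes the closely related single-squeeze synchrosqueezed CWT (`ssq_cwt` from the third-party ssqueezepy package) applied to a toy chirp, since it demonstrates the same energy-concentration idea; the snapshot rate and test signal are illustrative stand-ins for the Continuous-Radio-Snapshot data.

```python
# Smearing mitigation via a synchrosqueezed wavelet transform (sketch).
# Uses ssqueezepy's ssq_cwt as a stand-in for the paper's MSST.
import numpy as np
from ssqueezepy import ssq_cwt

fs = 1_000                                     # illustrative snapshot rate (Hz)
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * (5 * t + 3 * t ** 2))   # toy chirp as a stand-in signal

# Tx: synchrosqueezed (sharpened) transform; Wx: plain CWT for comparison.
Tx, Wx, ssq_freqs, scales = ssq_cwt(x, fs=fs)

# Energy in the synchrosqueezed plane concentrates along the true
# instantaneous-frequency ridge, mitigating the smearing visible in |Wx|.
ridge = ssq_freqs[np.abs(Tx).argmax(axis=0)]   # per-frame dominant frequency
```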