The goal of this paper is to assess the vulnerability of MEMS-based gyroscopes to targeted ultrasonic attacks. Towards this objective, a surface-micromachined planar MEMS gyroscope is fixed in space and subjected to ultrasonic waves with frequencies near its driving frequency. The ultrasonic input is shown to produce deceptive low-frequency angular velocity readings in the yaw direction. Using a physics-based mathematical model of the gyroscope, it is shown that the misalignment between the sensing and driving axes of the gyroscope is the main culprit behind its vulnerability to ultrasonic attacks. It is also concluded that ultrasonic attacks on MEMS gyroscopes can pose high security risks. In addition to the attack being barely audible, the resulting deceptive angular velocity signals have a very low frequency content that cannot be attenuated by adding a low-pass filter. Furthermore, the current approach of eliminating unwanted vibrations from the output signal of MEMS gyroscopes by using an identical proof mass to perform differential measurements is clearly ineffective in shielding the gyroscope from ultrasonic attacks. As such, new measures have to be taken to protect MEMS gyroscopes from targeted acoustic attacks.

INDEX TERMS Acoustic attack, gyroscope, MEMS, security.
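The misalignment mechanism can be sketched with a standard two-degree-of-freedom lumped-parameter gyroscope model (the symbols and the form of the coupling term below are illustrative assumptions, not the paper's exact formulation):

```latex
% Drive axis: electrostatic drive at \omega_d plus ultrasonic forcing at \omega_a
m\ddot{x} + c_x\dot{x} + k_x x = F_d\cos(\omega_d t) + F_a\cos(\omega_a t)

% Sense axis: Coriolis forcing plus misalignment stiffness coupling k_{xy}
m\ddot{y} + c_y\dot{y} + k_y y = -2m\Omega\dot{x} + k_{xy}\,x
```

Because the sense channel is demodulated at the driving frequency, a drive-axis response at an attack frequency near it that leaks into the sense axis through the coupling term appears, after demodulation, as a spurious angular-velocity signal at the beat frequency between the attack and drive frequencies. An attacker can make this beat frequency arbitrarily low, which is consistent with the observation that a low-pass filter cannot remove the deceptive signal.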
It has been shown in previous studies that haptic guidance improves the learning outcomes of handwriting motor skills. Both full and partial haptic guidance have been developed and evaluated in the literature. In this paper, we present two experimental studies that examine whether combining full and partial haptic guidance improves handwriting skills more effectively than either method alone. Experiment I, with 22 participants, compares the effectiveness of full-only and partial-only haptic guidance for improving learning outcomes in Arabic handwriting. Although all participants found haptic guidance effective and pleasant in general, experiment I concludes that there are no statistically significant differences in learning outcomes between full and partial haptic guidance. Experiment II investigates whether a combination of full and partial haptic guidance could further improve learning outcomes compared to full or partial haptic guidance alone. Learning outcomes and quality of experience are measured to evaluate each group's performance. Results from experiment II demonstrate that combining full and partial haptic guidance yields statistically significant improvements in handwriting quality compared to full or partial haptic guidance alone. In particular, starting with partial haptic guidance at an early stage of learning and then using full guidance at intermediate/advanced stages appeared to be the most effective. This implies that partial haptic guidance is better suited to learning the gross shape of handwriting (at early stages of the learning process), whereas full haptic guidance is better suited to learning its fine details (at intermediate or advanced stages). Therefore, partial-then-full haptic guidance appears to be the most effective approach for improving learning outcomes.
Most people touch their faces unconsciously, for instance to scratch an itch or to rest their chin in their hands. To reduce the spread of the novel coronavirus (COVID-19), public health officials recommend against touching one's face, as the virus is transmitted through mucous membranes in the mouth, nose, and eyes. Students, office workers, medical personnel, and people on trains have been found to touch their faces between 9 and 23 times per hour. This paper introduces FaceGuard, a system that utilizes deep learning to predict hand movements that result in touching the face and provides sensory feedback to stop the user before contact. The system utilizes an inertial measurement unit (IMU) to obtain features that characterize hand movements involving face touching. Time-series data can be efficiently classified using a 1D convolutional neural network (1D-CNN) with minimal feature engineering, since 1D-CNN filters automatically extract temporal features from IMU data. Thus, a 1D-CNN-based prediction model is developed and trained with data from 4,800 trials recorded from 40 participants. Training data are collected for hand movements involving face touching during various everyday activities such as sitting, standing, or walking. Results show that while the average time needed to touch the face is 1,200 ms, a prediction accuracy of more than 92% is achieved with less than 550 ms of IMU data. As for the sensory response, the paper presents a psychophysical experiment comparing the response times of three sensory feedback modalities: visual, auditory, and vibrotactile. Results demonstrate that the response time is significantly shorter for vibrotactile feedback (427.3 ms) than for visual (561.70 ms) and auditory (520.97 ms) feedback. Furthermore, the success rate (in avoiding face touching) is also statistically higher for vibrotactile and auditory feedback than for visual feedback.
These results demonstrate the feasibility of predicting a hand movement and providing timely sensory feedback within less than a second in order to avoid face touching.
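The way 1D-CNN filters extract temporal features from IMU data can be illustrated with a minimal NumPy sketch (the kernel, window length, and sampling rate below are illustrative assumptions; a trained model would learn many such kernels rather than use a hand-picked one):

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D sliding dot product (cross-correlation, as used in CNN layers)."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

# Synthetic 3-axis IMU window: 550 ms at a hypothetical 100 Hz rate -> 55 samples.
rng = np.random.default_rng(0)
window = rng.standard_normal((3, 55))

# A learned filter would be fit by training; this difference kernel simply
# responds to local changes in the signal, one temporal feature per position.
kernel = np.array([1.0, 0.0, -1.0])
features = np.stack([conv1d(axis, kernel) for axis in window])
print(features.shape)  # (3, 53)
```

In a real 1D-CNN, stacks of such filters feed nonlinearities and pooling before a classifier head, but the core operation is this sliding dot product along the time axis.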
Successful manipulation of unknown objects requires an understanding of their physical properties. Infrared thermography has the potential to provide real-time, contactless material characterization for unknown objects. In this paper, we propose an approach that utilizes active thermography and custom multi-channel neural networks to classify samples and to regress their density. With the help of off-the-shelf technology to estimate the volume of the object, the proposed approach can estimate the weight of an unknown object. We demonstrate the efficacy of the infrared thermography approach on a set of ten commonly used materials, achieving a 99.1% R²-fit for predicted versus actual density values. The system can be used with tele-operated or autonomous robots to optimize grasping techniques for unknown objects without touching them.
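The final weight-estimation step combines the two measurements described above: the density regressed from thermal data and the volume obtained from off-the-shelf sensing. A minimal sketch of that step (the numeric values are hypothetical, not from the paper):

```python
def estimate_weight_kg(density_kg_m3: float, volume_m3: float) -> float:
    """Mass follows directly once density is regressed and volume is measured."""
    return density_kg_m3 * volume_m3

# Hypothetical inputs: a regressed density of 2000 kg/m^3 and a measured
# volume of 0.125 m^3.
mass = estimate_weight_kg(2000.0, 0.125)
print(mass)  # 250.0
```

The accuracy of the mass estimate is therefore bounded by the product of the density-regression error and the volume-measurement error, which is why the reported R²-fit on density matters for downstream grasp planning.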