“…However, OFDM is mainly manifested in the CSI signal as introduced in Subsection 2.…”
[Table excerpt — cited acoustic sensing systems: SoundSense [73], BodyScope [74], EarSense [75], HearFit [76], Liang and Thomaz [77], DopLink [79], Dolphin [80], SoundWrite [83], LLAP [84], VSkin [81], Vernier [82], UltraGesture [85], RobuCIR [86], Acousticcardiogram [89]; column heading: “Localization and Navigation”]
Section: Orthogonal Frequency Division Multiplexing (OFDM)
Mentioning confidence: 99%
“…After obtaining the WiFi or acoustic signal, the next step is to characterize it using various sensing techniques. Similar to WiFi sensing, typical applications based on acoustic sensing include daily activity monitoring [73][74][75][76][77][78], gesture and hand movement recognition [79][80][81][82][83][84][85][86][87], health care [88][89][90][91][92], localization and navigation [93][94][95][96][97][98], and privacy and security [99][100][101][102][103][104][105][106][107][108][109][110][111]. In the following, we also introduce the basic signal content and other characterization methods.…”
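The section heading above concerns OFDM, the modulation from which WiFi CSI is measured: the wideband channel is split into orthogonal subcarriers via an IFFT, and CSI is the per-subcarrier complex gain seen at the receiver. As a minimal illustrative sketch (not taken from any of the surveyed systems; all parameters and names are assumptions), the snippet below builds one OFDM symbol with a cyclic prefix, passes it through a toy multipath channel, and recovers per-subcarrier CSI:

```python
import numpy as np

def ofdm_modulate(symbols, n_fft=64, cp_len=16):
    """Map per-subcarrier symbols to one time-domain OFDM symbol.
    The IFFT places each symbol on an orthogonal subcarrier; the
    cyclic prefix (CP) absorbs multipath delay spread."""
    time_domain = np.fft.ifft(symbols, n_fft)
    return np.concatenate([time_domain[-cp_len:], time_domain])  # prepend CP

def ofdm_demodulate(rx, n_fft=64, cp_len=16):
    """Strip the cyclic prefix and FFT back to per-subcarrier symbols."""
    return np.fft.fft(rx[cp_len:cp_len + n_fft], n_fft)

rng = np.random.default_rng(0)
tx_syms = np.exp(1j * np.pi / 4 * rng.integers(0, 8, 64))  # 8-PSK pilots
tx = ofdm_modulate(tx_syms)

h = np.array([1.0, 0.0, 0.4])      # toy multipath channel (two paths)
rx = np.convolve(tx, h)[:len(tx)]  # CP makes this circular per block

# CSI: ratio of received to transmitted symbols, one complex
# gain per subcarrier (here it equals the 64-point FFT of h).
rx_syms = ofdm_demodulate(rx)
csi = rx_syms / tx_syms
```

Because the channel is shorter than the cyclic prefix, the recovered CSI matches the channel's frequency response exactly; with real hardware it would also include noise and transceiver distortion.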
With the increasing pervasiveness of mobile devices such as smartphones, smart TVs, and wearables, smart sensing, which transforms the physical world into digital information through various sensing media, has drawn great attention from researchers. Among the different sensing media, WiFi and acoustic signals stand out for their ubiquity and zero hardware cost. Building on different underlying principles, researchers have proposed a range of technologies for sensing applications with WiFi and acoustic signals, covering human activity recognition, motion tracking, indoor localization, health monitoring, and the like. To give readers a comprehensive understanding of ubiquitous wireless sensing, we survey existing work and introduce its underlying principles, proposed technologies, and practical applications. We also discuss some open issues in this research area. Our survey reveals that, as a promising research direction, WiFi and acoustic sensing technologies can enable appealing applications, but still face limitations in hardware restrictions, robustness, and applicability.
“…Wang et al. [24] solved the frequency-selective fading problem caused by multipath effects by periodically transmitting acoustic signals of different frequencies. Additionally, they addressed the challenge of insufficient data by automatically generating data based on the correlation between CIR measurements and gesture changes, pushing past the accuracy and robustness limitations of acoustic gesture recognition.…”
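The snippet above builds on channel impulse response (CIR) measurements. Independent of Wang et al.'s actual pipeline, a standard way to estimate a CIR is to cross-correlate the received audio with a known transmitted probe whose autocorrelation is sharp, such as a pseudo-noise sequence; everything below is synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known wideband probe: a pseudo-noise sequence of +/-1 samples,
# of the kind acoustic sensing systems emit to probe the channel.
probe = rng.choice([-1.0, 1.0], size=4095)

# Toy CIR: a direct path plus two weaker reflections.
true_cir = np.zeros(40)
true_cir[0], true_cir[12], true_cir[25] = 1.0, 0.5, 0.2

# What the microphone records (noise-free for clarity):
rx = np.convolve(probe, true_cir)

# Cross-correlate with the probe: its sharp autocorrelation turns
# the correlation at each lag into (approximately) the CIR tap.
corr = np.correlate(rx, probe, mode="full")
zero_lag = len(probe) - 1
est_cir = corr[zero_lag:zero_lag + len(true_cir)] / np.dot(probe, probe)
```

The estimate carries small sidelobe error from the probe's imperfect autocorrelation, which shrinks as the probe gets longer; gesture sensing systems then track how these taps change over successive probes.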
With the global spread of the novel coronavirus, avoiding human-to-human contact has become an effective way to cut off the spread of the virus. Contactless gesture recognition therefore becomes an effective means of reducing the risk of contact infection in outbreak prevention and control. However, recognizing the everyday sign language of deaf users remains a challenge for sensing technology. Ubiquitous acoustics offer new ideas on how to perceive everyday behavior: a low sampling rate, the slow propagation speed of sound, and readily available hardware have led to the widespread use of acoustic gesture recognition. This paper therefore proposed UltrasonicGS, a contactless gesture and sign language sensing method based on ultrasonic signals. The method used Generative Adversarial Network (GAN)-based data augmentation to expand the dataset without human intervention and improve the performance of the behavior recognition model. In addition, to handle the inconsistent lengths and difficult alignment of input and output sequences for continuous gestures and sign language, we added the Connectionist Temporal Classification (CTC) algorithm after the CRNN network. The architecture also recognizes the sign language behaviors of specific user groups well, filling a gap in acoustic-based perception of Chinese sign language. We conducted extensive experiments and evaluations of UltrasonicGS in a variety of real scenarios. The results showed that UltrasonicGS achieved a recognition rate of 98.8% for 15 single gestures, and average correct recognition rates of 92.4% and 86.3% for six sets of continuous gestures and sign language gestures, respectively. Our proposed method thus provides a low-cost and highly robust solution for avoiding human-to-human contact.
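The abstract above attaches CTC after a CRNN precisely because per-frame network outputs and gesture labels have different, unaligned lengths. CTC introduces a reserved blank symbol and defines a collapsing rule: merge consecutive repeated labels, then delete blanks. Independent of UltrasonicGS's actual code, that best-path decoding rule can be sketched in a few lines:

```python
BLANK = 0  # CTC reserves one class index for the blank symbol

def ctc_greedy_decode(frame_labels):
    """Collapse a per-frame argmax sequence into a label sequence:
    merge consecutive repeats, then drop blanks (standard CTC
    best-path decoding)."""
    out, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return out

# Toy per-frame argmax outputs for a continuous-gesture clip.
# A blank between two identical labels keeps them distinct:
frames = [1, 1, 0, 1, 2, 2, 0, 3]
decoded = ctc_greedy_decode(frames)  # -> [1, 1, 2, 3]
```

During training, the CTC loss sums over all frame alignments that collapse to the target sequence, which is what lets the CRNN learn without frame-level annotations.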
“…But in a few cases, the LSTM framework outperformed other RNN variants in classification accuracy. Wang et al. [23] utilized sound features for the HAR system. In this framework, CNN is integrated with LSTM to cap-…”
Human Activity Recognition (HAR) has reached a new dimension with the support of the Internet of Things (IoT) and Artificial Intelligence (AI). To observe human activities, motion sensors like accelerometers or gyroscopes can be integrated with microcontrollers to collect inputs and send them to the cloud via IoT transceivers. These inputs capture characteristics such as the angular velocity and acceleration of movements, which are applied for effective HAR. But reaching a high recognition rate with low computational overhead remains an open problem. To solve this issue, this work proposes a novel ensemble of Capsule Networks (CN) and modified Gated Recurrent Units (MGRU) with an Extreme Learning Machine (ELM) for effective HAR classification on data collected by IoT systems, called Ensemble Capsule Gated (ECG)-Networks (NETS). The proposed system uses capsule networks for spatial feature extraction and a modified Gated Recurrent Unit (GRU) for temporal feature extraction. A fast feed-forward training network is then employed to train these features for human activity recognition. The proposed model is validated on real-time IoT data and the WISDM dataset. Experimental results demonstrate that the proposed model achieves better results than existing deep learning (DL) models.
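The ensemble above closes with an Extreme Learning Machine, whose defining trait is that the hidden layer is random and fixed, and only the output weights are trained, in closed form by least squares rather than backpropagation. The sketch below illustrates that idea on synthetic two-class data; none of the names, dimensions, or data come from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_fit(X, Y, n_hidden=64):
    """Extreme Learning Machine: a random, fixed hidden layer plus
    output weights solved in closed form (Moore-Penrose least squares)."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # random feature map
    beta = np.linalg.pinv(H) @ Y                 # only trained parameters
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy features, standing in for whatever an upstream extractor
# (e.g. the capsule/GRU stages in the abstract) would produce:
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic two-class labels
Y = np.stack([1 - y, y], axis=1)           # one-hot targets

W, b, beta = elm_fit(X, Y, n_hidden=64)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
train_acc = (pred == y).mean()
```

The closed-form solve is why ELMs are attractive for the low-overhead training the abstract targets: fitting costs one pseudoinverse instead of many gradient epochs.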