Human activity monitoring is an active research area for assisting independent living among the disabled and elderly populations. Various techniques have been proposed to recognise human activities, including sensors, cameras, wearables, and contactless microwave sensing. Among these, microwave sensing has recently gained significant attention because it addresses the privacy concerns of cameras and the discomfort caused by wearables. However, existing microwave sensing techniques have a fundamental disadvantage: they require controlled, ideal settings to detect activities with high accuracy, which restricts their wide adoption in non-line-of-sight (NLOS) environments. Here, we propose the concept of intelligent wireless walls (IWW) to ensure high-precision activity monitoring in complex environments where conventional microwave sensing fails. The IWW combines a reconfigurable intelligent surface (RIS), which performs beam steering and beamforming, with machine learning algorithms that automatically detect human activities with high accuracy. Two complex environments are considered: a corridor-junction scenario, with the transmitter and receiver in separate corridor sections, and a multi-floor scenario, with the transmitter and receiver placed on two different floors of a building. In each environment, three distinct body movements are considered, namely sitting, standing, and walking, performed by two subjects, one male and one female. It is demonstrated that the IWW provides a maximum detection gain of 28% in the multi-floor scenario and 25% in the corridor-junction scenario compared with traditional microwave sensing without an RIS.
Complex software systems that support organizations are updated regularly, which can erode their architecture. Moreover, documentation is rarely synchronized with changes to the software system, which creates a slew of issues for future software maintenance. To address this, information extraction tools use exact approaches to extract entities and their relationships from source code. Such exact approaches extract all features, including those that are less prominent and may not be significant for modularization. To resolve this issue, this work proposes an enhanced approximate information extraction approach, the fact extractor system for Java applications (FESJA), which aims to automate software modularization through fact extraction. The proposed FESJA technique extracts all entities, along with their more dominant formal and informal relationships, from Java source code. Results demonstrate the improved performance of FESJA, which extracts 74 classes, 43 interfaces, and 31 enumerations, in comparison with prominent information extraction techniques.
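The abstract does not detail FESJA's internals, but the kind of facts it reports (classes, interfaces, enumerations, and their extends/implements relationships) can be illustrated with a minimal sketch. The snippet below is a hypothetical regex-based extractor over .java files; the project path, regular expressions, and output format are assumptions for illustration, not the authors' implementation.

```python
import re
from pathlib import Path

# Hypothetical, minimal sketch of fact extraction from Java sources;
# FESJA's actual parsing and relationship-weighting logic is not shown here.
CLASS_RE = re.compile(r'\bclass\s+(\w+)(?:\s+extends\s+(\w+))?(?:\s+implements\s+([\w,\s]+))?')
INTERFACE_RE = re.compile(r'\binterface\s+(\w+)')
ENUM_RE = re.compile(r'\benum\s+(\w+)')

def extract_facts(src_dir):
    """Collect classes, interfaces, enums, and extends/implements relations."""
    facts = {"classes": [], "interfaces": [], "enums": [], "relations": []}
    for java_file in Path(src_dir).rglob("*.java"):
        text = java_file.read_text(encoding="utf-8", errors="ignore")
        for name, parent, ifaces in CLASS_RE.findall(text):
            facts["classes"].append(name)
            if parent:
                facts["relations"].append((name, "extends", parent))
            for iface in filter(None, (i.strip() for i in (ifaces or "").split(","))):
                facts["relations"].append((name, "implements", iface))
        facts["interfaces"] += INTERFACE_RE.findall(text)
        facts["enums"] += ENUM_RE.findall(text)
    return facts

if __name__ == "__main__":
    facts = extract_facts("path/to/java/project")  # placeholder path
    print({k: len(v) for k, v in facts.items()})
```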
Non-invasive indoor human activity detection using radio waves has attracted the interest of researchers, contributing to a range of new applications including smart healthcare. Localisation of activities can assist in developing advanced healthcare systems able to identify the location of patients. Numerous studies have shown radio frequency sensing to be a non-invasive method of identifying human activity by observing the signal propagation described in channel state information (CSI). This paper presents experimental results using Universal Software Radio Peripheral (USRP) devices to identify and localise a single human subject performing activities based on the CSI of radio-frequency signals. CSI samples are collected while a single subject performs no-activity, sitting, standing, and leaning-forward actions at various positions in a room, and while the subject walks in two directions across the observed area, giving a total of six activities spanning the monitored area. CSI is also collected while the monitored area is empty for further comparison. Machine learning is used to classify the collected CSI. The proposed approach uses a Super Learner (SL) algorithm that identifies the location of the different activities with 96% accuracy, outperforming existing benchmark approaches.
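A Super Learner is commonly realised as a stacked ensemble whose meta-learner is trained on out-of-fold predictions of the base learners. The sketch below shows one such realisation with scikit-learn over pre-extracted CSI feature vectors; the base learners, meta-learner, and input files are assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch of a stacked ("Super Learner"-style) ensemble over CSI feature
# vectors; base learners and feature files are illustrative assumptions.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X = np.load("csi_features.npy")  # placeholder: rows = CSI samples, cols = features
y = np.load("labels.npy")        # placeholder: activity/location labels

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("svm", SVC(probability=True, random_state=0)),
]
super_learner = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold base predictions train the meta-learner
)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
super_learner.fit(X_tr, y_tr)
print("Held-out accuracy:", super_learner.score(X_te, y_te))
```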
Sign language is a means of communication between the deaf community and hearing people that uses hand gestures, facial expressions, and body language. It has the same level of complexity as spoken language, but it does not employ the same sentence structure as English. The motions in sign language are made up of a range of distinct hand and finger articulations that are occasionally synchronized with the head, face, and body. Existing sign language recognition systems are mainly camera-based and have fundamental limitations: poor performance in low lighting, potential training challenges with longer video sequences, and serious privacy concerns. This study presents a contactless, privacy-preserving British Sign Language (BSL) recognition system using radar and deep learning algorithms, namely InceptionV3, VGG16, and VGG19. The six most common emotions are considered, namely confused, depressed, happy, hate, lonely, and sad. The collected data are represented as spectrograms, from which the deep learning models InceptionV3, VGG19, and VGG16 extract spatiotemporal features. Finally, the BSL emotions are identified by classifying the spectrograms into the considered emotion signs. The simulation results demonstrate that a maximum classification accuracy of 93.33% is obtained using VGG16.
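Classifying spectrogram images with pretrained convolutional networks is typically done via transfer learning. The sketch below trains a new classification head on an ImageNet-pretrained VGG16 in Keras; the image size, head layers, training settings, and directory layout are assumptions, not the study's exact setup.

```python
# Hedged sketch of transfer learning with an ImageNet-pretrained VGG16 on
# radar spectrogram images; paths and hyperparameters are illustrative.
import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 6  # confused, depressed, happy, hate, lonely, sad

train_ds = tf.keras.utils.image_dataset_from_directory(
    "spectrograms/train", image_size=IMG_SIZE, batch_size=32)  # placeholder path
# Apply the VGG16-specific input preprocessing to each batch.
train_ds = train_ds.map(lambda x, y: (tf.keras.applications.vgg16.preprocess_input(x), y))

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the convolutional backbone, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```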
The elderly population is growing, and the healthcare system is experiencing a strain on the services it provides to the elderly. The recent COVID-19 pandemic has increased this strain and raised the risk of exposure during visits to elderly homes, increasing the desire for technological solutions to counteract this. Currently, reliable real-time non-invasive sensing systems are lacking. This paper uses radio-frequency sensing, in which signal propagation during activities of daily living (ADLs) is observed in channel state information (CSI) reports. Real-time data are collected for three classes: "movement", "empty room", and "no activity". A filter is applied to reduce the noise in the CSI data, and the mean, max, min, kurtosis, skew, and standard deviation features are then extracted. A machine learning model provides classification for the real-time monitoring system, allowing detection of abnormalities in the expected ADLs of the elderly. The timing of the classifications gives insight into the real-time capabilities of the system. The Random Forest algorithm is chosen to build the machine learning model based on its accuracy and timing. The model achieves an accuracy of 100% on new, unseen test data with an average classification time of 7.31 milliseconds.
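The feature-extraction and classification steps described above can be illustrated with a short sketch: the six statistics are computed per CSI window, a Random Forest is trained, and the per-window prediction time is measured. The window segmentation, file names, and classifier settings are assumptions, not the paper's exact pipeline.

```python
# Hedged sketch: statistical features from windowed CSI amplitude fed to a
# Random Forest, with per-window classification time measured.
import time
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def features(window):
    """mean, max, min, kurtosis, skew, std of one CSI amplitude window."""
    return [np.mean(window), np.max(window), np.min(window),
            kurtosis(window), skew(window), np.std(window)]

csi = np.load("csi_amplitude.npy")     # placeholder: denoised CSI amplitude stream
labels = np.load("window_labels.npy")  # placeholder: movement / empty room / no activity
windows = np.array_split(csi, len(labels))

X = np.array([features(w) for w in windows])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

start = time.perf_counter()
pred = clf.predict(X_te)
elapsed_ms = 1000 * (time.perf_counter() - start) / len(X_te)
print("accuracy:", (pred == y_te).mean(), "avg time per window (ms):", elapsed_ms)
```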
Human activity monitoring is a fascinating area of research for supporting independent living among the elderly and disabled. Cameras, sensors, wearables, and non-contact microwave sensing have all been suggested as methods for identifying distinct human activities. Microwave sensing has lately attracted much interest since it can address the privacy problems caused by cameras and the discomfort caused by wearables, especially in the healthcare domain. A fundamental drawback of current microwave sensing methods is non-line-of-sight environments: these methods need precise, controlled conditions to detect activity with high precision. In this paper, we propose an intelligent reflecting surface (IRS) to ensure high-accuracy activity monitoring in complicated environments where traditional microwave sensing is ineffective. This work is based on a reconfigurable IRS that can perform beamforming/beam steering, combined with machine learning algorithms that can accurately recognise several human activities. For experimentation, the transmitter and receiver are positioned on two separate floors of a building, creating a complicated multi-floor scenario in which to test the IRS. Multiple activities, such as sitting, standing, and walking, are performed in the building by two individuals, a male and a female. The experiments show that IRS technology increases detection accuracy by around 30% compared with conventional microwave sensing without an IRS.
This paper presents a study on contactless localization for activity recognition based on radio-frequency sensing. The focus of the study is the quantitative analysis of the collected data, which takes the form of channel state information (CSI). The proposed method uses a software-defined radio (SDR) system in combination with an ensemble learning technique to localize and identify daily living activities such as leaning, sitting, standing, and walking. Specifically, the SDR devices, Universal Software Radio Peripheral (USRP) models X300/X310, are used to collect data on these activities. The data are collected from an empty space and from a participant performing daily living activities in different territories, and the acquired data are labelled by region, zone, and performed activity. The CSI data are subsequently preprocessed and fed into an ensemble-based machine learning algorithm for classification. Furthermore, a stability analysis of the proposed method is performed to evaluate its reliability and robustness. The performance of the algorithm is evaluated using various metrics, including a confusion matrix, accuracy, cross-validation score, and training time [1], [2]. The results demonstrate that the proposed ensemble-based approach achieves an accuracy of up to 90% in activity recognition, highlighting the effectiveness of the proposed method in contactless localization for activity recognition.
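The evaluation protocol described above (accuracy, cross-validation score, confusion matrix, and training time for an ensemble classifier on labelled CSI features) can be sketched as follows; the choice of gradient boosting as the ensemble, the input files, and the data split are assumptions made for illustration.

```python
# Hedged sketch of the evaluation loop: an ensemble classifier on labelled CSI
# features, reporting training time, accuracy, CV score, and confusion matrix.
import time
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

X = np.load("csi_features.npy")           # placeholder: preprocessed CSI feature matrix
y = np.load("region_zone_activity.npy")   # placeholder: region/zone/activity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0)
start = time.perf_counter()
clf.fit(X_tr, y_tr)
train_time = time.perf_counter() - start

pred = clf.predict(X_te)
print("training time (s):", round(train_time, 2))
print("accuracy:", accuracy_score(y_te, pred))
print("5-fold CV score:", cross_val_score(clf, X, y, cv=5).mean())
print("confusion matrix:\n", confusion_matrix(y_te, pred))
```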