Human activity recognition (HAR) has emerged as a significant area of research due to its many potential applications, including ambient assisted living, healthcare, and abnormal behaviour detection. Recently, HAR using WiFi channel state information (CSI) has become a predominant approach in indoor environments compared to sensor- and vision-based alternatives because of its privacy-preserving qualities: it eliminates the need to carry additional devices and offers the flexibility to capture motion in both line-of-sight (LOS) and non-line-of-sight (NLOS) settings. Existing deep learning (DL)-based HAR approaches usually extract either temporal or spatial features and lack adequate means to integrate and utilize the two simultaneously, making it challenging to recognize different activities accurately. Motivated by this, we propose a novel DL-based model named spatio-temporal convolution with nested long short-term memory (STC-NLSTMNet), which extracts spatial and temporal features concurrently and automatically recognizes human activity with very high accuracy. The proposed STC-NLSTMNet model mainly comprises depthwise separable convolution (DS-Conv) blocks, a feature attention module (FAM), and nested LSTM (NLSTM). The DS-Conv blocks extract spatial features from the CSI signal, and the FAM draws attention to the most essential of these features. The resulting robust features are fed into the NLSTM to explore the hidden intrinsic temporal patterns in CSI signals. The proposed STC-NLSTMNet model is evaluated on two publicly available datasets: Multi-environment and StanWiFi. The experimental results show that STC-NLSTMNet achieves activity recognition accuracies of 98.20% and 99.88% on the Multi-environment and StanWiFi datasets, respectively.
Its activity recognition performance is also compared with existing approaches: the proposed STC-NLSTMNet model improves activity recognition accuracy by 4% and 1.88% on these datasets, respectively, over the best existing method.
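The parameter savings that motivate DS-Conv blocks over standard convolutions can be illustrated with a short sketch; the kernel size and channel counts below are illustrative assumptions, not the paper's actual layer configuration.

```python
def standard_conv_params(k, c_in, c_out):
    # A standard 2-D convolution learns one k x k filter
    # for every (input channel, output channel) pair.
    return k * k * c_in * c_out

def ds_conv_params(k, c_in, c_out):
    # A depthwise separable convolution splits this into a per-channel
    # k x k depthwise filter plus a 1x1 pointwise projection to c_out.
    return k * k * c_in + c_in * c_out

# Illustrative layer: 3x3 kernel, 32 input channels, 64 output channels.
std = standard_conv_params(3, 32, 64)  # 3*3*32*64 = 18432 weights
ds = ds_conv_params(3, 32, 64)         # 288 + 2048 = 2336 weights
print(std, ds, round(std / ds, 1))     # roughly an 8x reduction
```

The same reduction applies to multiply-accumulate operations, which is why DS-Conv blocks keep spatial feature extraction cheap enough to pair with a recurrent stage.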
Cognitive impairment has a significantly negative impact on global healthcare and the community. Preserving cognition and mental retention becomes increasingly difficult for older adults as they age. Early detection of cognitive impairment can reduce the risk of the condition progressing to permanent mental damage. This paper aims to develop a machine learning model to detect and differentiate cognitive impairment categories (severe, moderate, mild, and normal) by analyzing neurophysical and physical data. Keystroke dynamics and a smartwatch were used to collect individuals' neurophysical and physical data, respectively. An advanced ensemble learning algorithm, the Gradient Boosting Machine (GBM), is proposed to classify the cognitive severity level (absence, mild, moderate, and severe) based on Standardised Mini-Mental State Examination (SMMSE) questionnaire scores. The statistical method Pearson's correlation and a wrapper feature selection technique were used to analyze and select the best features, on which the proposed GBM algorithm was then trained, achieving an accuracy of more than 94%. This work adds a new dimension to the state of the art by predicting cognitive impairment from neurophysical and physical data together.
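The Pearson's-correlation screening step described above can be sketched as follows; the keystroke feature names and synthetic values are placeholders for illustration, not the study's data.

```python
import math

def pearson_r(x, y):
    # Pearson's correlation coefficient between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic example: a keystroke hold-time feature tracks the severity
# label closely, while a noisy feature does not.
severity = [0, 1, 2, 3, 2, 1, 0, 3]
features = {
    "hold_time": [10, 14, 19, 25, 18, 13, 11, 24],
    "noise": [5, 5, 6, 5, 6, 5, 6, 6],
}

# Rank features by absolute correlation with the target label.
ranked = sorted(
    features.items(),
    key=lambda kv: abs(pearson_r(kv[1], severity)),
    reverse=True,
)
print([name for name, _ in ranked])  # strongest correlate first
```

In practice this ranking would be followed by the wrapper selection step (repeatedly refitting the classifier on candidate subsets) before training the GBM.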
COVID-19 is a life-threatening infectious disease that has become a pandemic. The virus grows within the lower respiratory tract, where early-stage symptoms (such as cough, fever, and sore throat) develop, before it causes a lung infection (pneumonia). This paper proposes a new artificial testing methodology to determine whether a patient has been infected by COVID-19. We present a prediction model based on a convolutional neural network (CNN) and our own mathematical equation-based algorithm, named SymptomNet. The CNN classifies lung infections (pneumonia) from frontal chest X-ray images, and the symptom analysis algorithm (SymptomNet) predicts the likelihood of COVID-19 infection from a patient's developed symptoms. By combining the CNN image classifier and the SymptomNet algorithm, we developed a model that predicts COVID-19 patients with an approximate accuracy of 96%. Ten of the 13 symptoms were significantly correlated with COVID-19. Specifically, fever (r = 0.20, p = 0.001), cough (r = 0.20, p < 0.001), body chills (r = 0.22, p < 0.001), shortness of breath (r = 0.16, p < 0.001), muscle pain (r = −0.45, p < 0.001), and sore throat (r = −0.35, p < 0.001) were significantly related. In this model, the CNN classifier achieves an accuracy of approximately 96% (training loss = 0.1311, training accuracy = 0.9596, validation loss = 0.2754, validation accuracy = 0.9273, F1-score = 94.16, precision = 91.33), and the SymptomNet algorithm achieves an accuracy of 97% (485 successful predictions out of 500 samples). This research work obtained promising accuracy in predicting COVID-19-infected patients, and the proposed model can be used ubiquitously at low cost.
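One way per-symptom correlations could feed a simple risk score is sketched below. This is a hypothetical illustration that reuses the reported r-values as weights; it is not the paper's actual SymptomNet equation, and the function and weighting scheme are assumptions.

```python
# Hypothetical symptom weights taken from the Pearson r-values reported
# in the abstract; the real SymptomNet equation is not reproduced here.
SYMPTOM_WEIGHTS = {
    "fever": 0.20,
    "cough": 0.20,
    "body_chills": 0.22,
    "shortness_of_breath": 0.16,
    "muscle_pain": -0.45,
    "sore_throat": -0.35,
}

def symptom_score(present):
    # Sum the weights of the reported symptoms: positively correlated
    # symptoms raise the score, negatively correlated ones lower it.
    return sum(SYMPTOM_WEIGHTS.get(s, 0.0) for s in present)

print(symptom_score({"fever", "cough", "body_chills"}))  # about 0.62
```

A deployed system would combine such a score with the CNN's image-based prediction, e.g. by thresholding or averaging the two outputs.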