The applications of human activity recognition are expanding rapidly, spanning from health monitoring systems to virtual reality. Automatic recognition of daily life activities has therefore become significant for numerous applications. In recent years, many datasets have been proposed for training machine learning models to monitor and recognize human daily living activities efficiently. However, the performance of machine learning models in activity recognition degrades considerably when a dataset contains incomplete activities, i.e., missing samples in the data captures. In this work, we therefore propose a methodology for extrapolating the missing samples of a dataset to better recognize human daily living activities. The proposed method efficiently pre-processes the data captures and applies the k-Nearest Neighbors (KNN) imputation technique to extrapolate the missing samples. The proposed methodology extrapolated activity patterns closely matching those in the real dataset.
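The abstract does not give implementation details of the imputation step. A minimal numpy sketch of KNN imputation, under the assumption that missing samples appear as NaNs and are replaced by the mean of the k most similar complete rows (function name and neighbour-averaging rule are my own choices; libraries such as scikit-learn offer a ready-made `KNNImputer`), could look like:

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaN entries in each row using the k most similar complete rows.

    Distances are computed only over the features observed in the target
    row; each missing value is replaced by the neighbours' mean.
    """
    X = np.asarray(X, dtype=float).copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for i in range(len(X)):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        d = np.sqrt(((complete[:, obs] - X[i, obs]) ** 2).sum(axis=1))
        neighbours = complete[np.argsort(d)[:k]]
        X[i, miss] = neighbours[:, miss].mean(axis=0)
    return X
```

Rows that are already complete pass through unchanged, which preserves the original activity patterns the method aims to reproduce.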
Automated recognition of human activities or actions has great significance, as it underpins wide-ranging applications including surveillance, robotics, and personal health monitoring. Over the past few years, many computer vision-based methods have been developed for recognizing human actions from RGB and depth camera videos. These methods include space-time trajectories, motion encoding, key-pose extraction, space-time occupancy patterns, depth motion maps, and skeleton joints. However, these camera-based approaches are affected by background clutter and illumination changes and cover only a limited field of view. Wearable inertial sensors provide a viable alternative to these challenges but are subject to their own limitations, such as location and orientation sensitivity. Because the data obtained from cameras and inertial sensors are complementary, the use of multiple sensing modalities for accurate recognition of human actions is steadily increasing. This paper presents a viable multimodal feature-level fusion approach for robust human action recognition, which utilizes data from multiple sensors, including an RGB camera, a depth sensor, and wearable inertial sensors. We extracted computationally efficient features from the RGB-D video camera and inertial body sensor data: densely extracted histogram of oriented gradients (HOG) features from the RGB/depth videos and statistical signal attributes from the wearable sensor data. The proposed human action recognition (HAR) framework is tested on UTD-MHAD, a publicly available multimodal human action dataset consisting of 27 different human actions. K-nearest neighbor and support vector machine classifiers are used for training and testing the proposed fusion model. The experimental results indicate that the proposed scheme achieves better recognition results than the state of the art.
The feature-level fusion of RGB and inertial sensors provides the overall best performance for the proposed system, with an accuracy rate of 97.6%.
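The mechanics of feature-level fusion can be illustrated with a small numpy sketch: per-modality descriptors are concatenated into one vector before classification. The statistical attributes chosen, the L2 normalisation, and the 1-nearest-neighbour rule standing in for the paper's KNN/SVM classifiers are all illustrative assumptions, and the HOG vector is assumed to be precomputed:

```python
import numpy as np

def stat_features(signal):
    """Statistical attributes per inertial axis: mean, std, min, max."""
    return np.concatenate([signal.mean(0), signal.std(0),
                           signal.min(0), signal.max(0)])

def fuse(hog_vec, inertial_signal):
    """Feature-level fusion: concatenate per-modality descriptors and
    L2-normalise the combined vector before classification."""
    f = np.concatenate([hog_vec, stat_features(inertial_signal)])
    return f / (np.linalg.norm(f) + 1e-12)

def nn_predict(train_X, train_y, x):
    """1-nearest-neighbour prediction on fused feature vectors."""
    d = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(d.argmin())]
```

Fusing at the feature level lets a single classifier weigh evidence from both modalities jointly, which is what gives the combined system its robustness to failures of either sensor alone.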
Smartphones are context-aware devices that provide a compelling platform for ubiquitous computing and assist users in accomplishing many of their routine tasks, such as sending and receiving emails, anytime and anywhere. The nature of tasks conducted with these devices has evolved with the exponential increase in a smartphone's sensing and computing capabilities. Owing to their ease of use and convenience, many users tend to store private data, such as personal identifiers and bank account details, on their smartphones. However, this sensitive data can be vulnerable if the device gets stolen or lost. A traditional approach to protecting such data on mobile devices is to authenticate users with mechanisms such as PINs, passwords, and fingerprint recognition. However, these techniques depend on user compliance and are vulnerable to a plethora of attacks, such as smudge attacks. The work in this paper addresses these challenges by proposing a novel authentication framework based on recognizing the behavioral traits of smartphone users using the smartphone's embedded sensors, such as the accelerometer, gyroscope, and magnetometer. The proposed framework also provides a platform for multi-class smart user authentication, which provides different levels of access to a wide range of smartphone users. This work is validated with a series of experiments that demonstrate the effectiveness of the proposed framework.
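The abstract does not describe its pre-processing, but a common first step in sensor-based behavioral authentication is segmenting the raw accelerometer, gyroscope, and magnetometer streams into overlapping windows before feature extraction. A minimal sketch (window length and step size are illustrative choices, not the paper's settings):

```python
import numpy as np

def sliding_windows(stream, win_len, step):
    """Segment a (samples, axes) sensor stream into overlapping windows.

    Returns an array of shape (num_windows, win_len, axes); each window
    is later reduced to a feature vector for the classifier.
    """
    n = (len(stream) - win_len) // step + 1
    return np.stack([stream[i * step : i * step + win_len] for i in range(n)])
```

Overlapping windows trade extra computation for more training examples and smoother coverage of transitions between behaviors.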
Smartphones are ubiquitous devices that are becoming ever more intelligent and context-aware through emerging sensing, networking, and computing capabilities. They offer users a captivating platform for performing a wide variety of tasks, including socializing, communicating, sending and receiving emails, and storing and accessing personal data, anytime and anywhere. Many people now store different types of private and sensitive data on their smartphones, including bank account details, personal identifiers, account credentials, and credit card details, and many keep their personal e-accounts permanently logged in on their mobile devices. Hence, these devices are prone to a range of security and privacy threats and attacks. Commonly used approaches for securing mobile devices, such as passcodes, PINs, pattern locks, face recognition, and fingerprint scans, are exposed to several attacks, including smudge attacks, side-channel attacks, and shoulder-surfing attacks. To address these challenges, a novel continuous authentication scheme is presented in this study, which recognizes smartphone users on the basis of their physical activity patterns using the accelerometer, gyroscope, and magnetometer sensors of the smartphone. A series of experiments on user recognition is performed with different machine learning classifiers, analyzing six different activities across multiple positions of the smartphone on the user's body. The SVM classifier achieved the best results for user recognition, with an overall average accuracy of 97.95%. A comprehensive analysis of the user recognition results validates the efficiency of the proposed scheme.
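The abstract reports SVM as the best classifier but gives no code. As a self-contained stand-in (a nearest-centroid rule rather than a real SVM, so the accuracy figures above do not transfer to it; feature choice is also my assumption), per-user recognition from windowed sensor features could be sketched as:

```python
import numpy as np

def window_features(window):
    """Per-axis mean and standard deviation of one sensor window."""
    return np.concatenate([window.mean(0), window.std(0)])

class CentroidAuthenticator:
    """Nearest-centroid user recogniser: each enrolled user is represented
    by the mean of their training feature vectors (a simplified stand-in
    for the SVM used in the paper)."""

    def fit(self, X, y):
        self.labels = np.unique(y)
        self.centroids = np.stack([X[y == u].mean(0) for u in self.labels])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.labels[d.argmin(axis=1)]
```

A real deployment would replace the centroid rule with a trained SVM and add a rejection threshold so that unknown users are denied rather than assigned the nearest enrolled identity.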
Electrocardiogram (ECG) signals play a vital role in diagnosing and monitoring patients suffering from various cardiovascular diseases (CVDs). This research aims to develop a robust algorithm that can accurately classify ECG signals even in the presence of environmental noise. A one-dimensional convolutional neural network (CNN) with two convolutional layers, two down-sampling layers, and a fully connected layer is proposed in this work. The same 1D data were then transformed into two-dimensional (2D) images to improve the model's classification accuracy, and a 2D CNN model was applied, consisting of input and output layers, three 2D convolutional layers, three down-sampling layers, and a fully connected layer. Classification accuracies of 97.38% and 99.02% are achieved with the proposed 1D and 2D models, respectively, when tested on the publicly available Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Both proposed CNN models outperform the corresponding state-of-the-art classification algorithms on the same data, which validates their effectiveness.
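The shape arithmetic of the 1D architecture (two convolutional layers, two down-sampling layers, then a fully connected layer) can be traced with a bare numpy forward pass. The kernel sizes, channel counts, 360-sample segment length, and 5-class output below are illustrative assumptions, not the paper's exact hyper-parameters, and the weights are random rather than trained:

```python
import numpy as np

def conv1d_relu(x, kernels, stride=1):
    """'Valid' 1-D convolution followed by ReLU.
    x: (in_ch, length); kernels: (out_ch, in_ch, k)."""
    out_ch, in_ch, k = kernels.shape
    out_len = (x.shape[1] - k) // stride + 1
    out = np.zeros((out_ch, out_len))
    for o in range(out_ch):
        for t in range(out_len):
            out[o, t] = (x[:, t * stride : t * stride + k] * kernels[o]).sum()
    return np.maximum(out, 0.0)

def maxpool1d(x, size=2):
    """Non-overlapping max pooling along the time axis (down-sampling)."""
    trimmed = x[:, : (x.shape[1] // size) * size]
    return trimmed.reshape(x.shape[0], -1, size).max(axis=2)

rng = np.random.default_rng(0)
beat = rng.normal(size=(1, 360))                                # one ECG segment
h = maxpool1d(conv1d_relu(beat, rng.normal(size=(16, 1, 5))))   # conv1 + pool1
h = maxpool1d(conv1d_relu(h,    rng.normal(size=(32, 16, 5))))  # conv2 + pool2
logits = rng.normal(size=(5, h.size)) @ h.reshape(-1)           # fully connected
```

Each valid convolution with kernel size 5 shortens the sequence by 4 and each pooling halves it: 360 → 356 → 178 → 174 → 87, so the fully connected layer sees 32 × 87 values.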
The Internet of Things (IoT) is a rapidly growing paradigm for smart cities that provides communication, identification, and sensing capabilities among physically distributed devices. With the evolution of the IoT, user dependence on smart systems and services, such as smart appliances, smartphones, security, and healthcare applications, has increased. This demands secure authentication mechanisms to preserve users’ privacy when interacting with smart devices. This paper proposes a heterogeneous framework, “ADLAuth”, for passive and implicit authentication of the user using either a smartphone’s built-in sensors or wearable sensors by analyzing the user’s physical activity patterns. Multiclass machine learning algorithms are applied for identity verification. Analyses are performed on three different datasets of heterogeneous sensors covering a diverse set of activities. A series of experiments has been performed to test the effectiveness of the proposed framework, and the results demonstrate that the proposed scheme outperforms existing work on user authentication.
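The abstract does not name its evaluation metrics; two standard figures of merit for authentication systems of this kind are the false-accept rate (impostors wrongly admitted) and the false-reject rate (genuine users wrongly denied). A minimal sketch, assuming similarity scores where higher means a better match:

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """False-accept rate and false-reject rate when the system accepts
    any comparison whose score is >= threshold."""
    far = float(np.mean(np.asarray(impostor_scores) >= threshold))
    frr = float(np.mean(np.asarray(genuine_scores) < threshold))
    return far, frr
```

Sweeping the threshold trades one error rate against the other; the operating point where the two are equal (the equal error rate) is a common single-number summary.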
Cryptography is commonly used to secure communication and data transmission over insecure networks through the use of cryptosystems. A cryptosystem is a suite of cryptographic algorithms that provides security services such as confidentiality. A substitution-box (S-box) is the only component in a cryptosystem that produces a nonlinear mapping between inputs and outputs, thereby providing confusion in the data. An S-box possessing high nonlinearity and low linear and differential probabilities is considered cryptographically secure. In this study, a new technique is presented to construct cryptographically strong 8×8 S-boxes by applying an adjacency matrix over the Galois field GF(2^8). The adjacency matrix is obtained from the coset diagram for the action of the modular group PSL(2, Z) on the projective line PL(F_7) over the finite field F_7. The strength of the proposed S-boxes is examined with standard S-box tests, which validate their cryptographic strength. Moreover, we use the majority logic criterion to evaluate the proposed S-boxes in an image encryption application. The encryption results reveal the robustness and effectiveness of the proposed S-box design in image encryption applications.
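The coset-diagram construction itself is beyond a short sketch, but the way any 8×8 S-box is used, namely as a bijective 256-entry lookup table substituting the bytes of an image, can be illustrated. The affine toy table below is my own stand-in and has none of the paper's cryptographic strength; it only demonstrates the lookup mechanics and the bijectivity requirement:

```python
def make_toy_sbox(a=167, b=29):
    """Toy bijective 8-bit S-box from the affine map x -> (a*x + b) mod 256.
    An odd multiplier a guarantees a bijection; illustrative only and
    NOT cryptographically strong."""
    assert a % 2 == 1
    return [(a * x + b) % 256 for x in range(256)]

def substitute(data, sbox):
    """Byte-wise substitution of an image (or any byte string)."""
    return bytes(sbox[b] for b in data)

def invert(sbox):
    """Inverse lookup table, needed for decryption."""
    inv = [0] * 256
    for i, v in enumerate(sbox):
        inv[v] = i
    return inv
```

Bijectivity is what makes decryption possible: every output byte corresponds to exactly one input byte, so the inverse table recovers the plaintext exactly.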