Automated recognition of human activities or actions has great significance, as it supports wide-ranging applications including surveillance, robotics, and personal health monitoring. Over the past few years, many computer vision-based methods have been developed for recognizing human actions from RGB and depth camera videos. These methods include space-time trajectories, motion encoding, key pose extraction, space-time occupancy patterns, depth motion maps, and skeleton joints. However, these camera-based approaches are affected by background clutter and illumination changes and are limited to a restricted field of view. Wearable inertial sensors provide a viable solution to these challenges but are subject to their own limitations, such as sensitivity to location and orientation. Because the data obtained from cameras and inertial sensors are complementary, the use of multiple sensing modalities for accurate recognition of human actions is steadily increasing. This paper presents a viable multimodal feature-level fusion approach for robust human action recognition that utilizes data from multiple sensors, including an RGB camera, a depth sensor, and wearable inertial sensors. We extract computationally efficient features from the RGB-D video and the inertial body sensors: densely extracted histogram of oriented gradients (HOG) features from the RGB/depth videos and statistical signal attributes from the wearable sensor data. The proposed human action recognition (HAR) framework is tested on the publicly available multimodal human action dataset UTD-MHAD, which consists of 27 different human actions. K-nearest neighbor and support vector machine classifiers are used for training and testing the proposed fusion model. The experimental results indicate that the proposed scheme achieves better recognition results than the state of the art.
The feature-level fusion of RGB and inertial sensors provides the overall best performance for the proposed system, with an accuracy rate of 97.6%.
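The feature-level fusion described above can be sketched in a few lines: per-modality descriptors are concatenated into one vector before classification. The sketch below is a toy illustration only, not the paper's actual pipeline; the "HOG" vectors, inertial windows, feature choices, and the minimal 1-NN classifier are all hypothetical stand-ins.

```python
import numpy as np

def extract_stat_features(signal):
    """Simple statistical attributes from a wearable-sensor window
    (mean, std, min, max per axis) -- a simplified stand-in for the
    attributes described in the abstract."""
    return np.concatenate([signal.mean(axis=0), signal.std(axis=0),
                           signal.min(axis=0), signal.max(axis=0)])

def fuse_features(hog_vec, inertial_vec):
    """Feature-level fusion: concatenate per-modality descriptors
    into a single vector before classification."""
    return np.concatenate([hog_vec, inertial_vec])

def knn_predict(train_X, train_y, x, k=1):
    """Minimal k-NN classifier (Euclidean distance, majority vote)."""
    d = np.linalg.norm(train_X - x, axis=1)
    idx = np.argsort(d)[:k]
    vals, counts = np.unique(train_y[idx], return_counts=True)
    return vals[np.argmax(counts)]

# Toy demo: two actions, tiny synthetic "HOG" and accelerometer windows.
rng = np.random.default_rng(0)
hog_a, hog_b = rng.normal(0, 1, 8), rng.normal(3, 1, 8)
acc_a, acc_b = rng.normal(0, 1, (20, 3)), rng.normal(2, 1, (20, 3))
X = np.stack([fuse_features(hog_a, extract_stat_features(acc_a)),
              fuse_features(hog_b, extract_stat_features(acc_b))])
y = np.array([0, 1])
query = fuse_features(hog_a + 0.1, extract_stat_features(acc_a + 0.05))
print(knn_predict(X, y, query))  # predicts class 0
```

The appeal of fusion at the feature level, rather than the decision level, is that a single classifier sees all modalities jointly, so correlations between visual and inertial cues can inform the decision boundary.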
The Controller Area Network (CAN) bus serves as the legacy protocol for in-vehicle network communication in electric vehicles. Simplicity, robustness, and suitability for real-time systems are its salient features. Unfortunately, the CAN bus protocol is vulnerable to various cyberattacks because it lacks a message authentication mechanism, paving the way for attackers to penetrate the network. This paper proposes a new, effective anomaly detection model for CAN traffic based on a modified one-class support vector machine. The proposed model uses an improved optimization algorithm, known as the modified bat algorithm, to find the most accurate model structure during offline training. To evaluate the effectiveness of the proposed method, CAN traffic was logged from an unmodified licensed electric vehicle in normal operation to generate an attack-free dataset of each message ID and its corresponding occurrence frequency. In addition, to compare the proposed method against two well-known CAN bus anomaly detection algorithms, Isolation Forest and the classical one-class support vector machine, we provide Receiver Operating Characteristic (ROC) curves for each method to quantify the correctly classified windows in test sets containing attacks. Experimental results indicate that the proposed method achieves the highest True Positive Rate (TPR) and the lowest False Positive Rate (FPR) among the three algorithms. Moreover, to show that the proposed method generalizes to other datasets, we also evaluated it on two recent, widely used public datasets for CAN bus traffic anomaly detection.
Benchmarking on additional CAN bus traffic datasets demonstrates that the proposed method is independent of the meaning of each message ID and data field, which makes the model adaptable to different CAN datasets.

INDEX TERMS Electric vehicles, controller area network (CAN bus), anomaly detection, one-class support vector machine, optimization algorithm.
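The windowed occurrence-frequency representation the abstract relies on can be illustrated as follows. The sketch counts how often each message ID appears per fixed-size window of attack-free traffic and learns per-ID frequency bounds; the bounds check is only a stand-in for the modified one-class SVM boundary, and the IDs, window size, and traffic below are hypothetical.

```python
from collections import Counter

def id_frequency_windows(can_log, window=10):
    """Split a CAN log (sequence of message IDs) into fixed-size
    windows and count how often each ID occurs per window -- the
    occurrence-frequency representation the abstract describes."""
    chunks = [can_log[i:i + window] for i in range(0, len(can_log), window)]
    ids = sorted(set(can_log))
    return [[Counter(c)[mid] for mid in ids] for c in chunks], ids

def fit_bounds(train_windows):
    """Per-ID [min, max] frequency bounds learned from attack-free
    traffic; a crude stand-in for a one-class decision boundary."""
    cols = list(zip(*train_windows))
    return [(min(c), max(c)) for c in cols]

def is_anomalous(window_counts, bounds):
    """Flag a window whose per-ID counts fall outside the learned bounds."""
    return any(not (lo <= c <= hi) for c, (lo, hi) in zip(window_counts, bounds))

# Normal traffic: IDs 0x100 and 0x200 each appear ~5 times per window.
normal = [0x100, 0x200] * 50
train, ids = id_frequency_windows(normal)
bounds = fit_bounds(train)

# A DoS-style flood of ID 0x100 skews the frequency profile.
attack = [0x100] * 10
attack_counts = [Counter(attack)[mid] for mid in ids]
print(is_anomalous(attack_counts, bounds))  # True
```

Because the detector only sees how often each ID occurs, not what its payload means, this representation is what makes the approach portable across CAN datasets with different message semantics.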
The controller area network (CAN) is the most widely used intra-vehicular communication network in the automotive industry. Because of its simplicity in design, it lacks most of the requirements of a security-proven communication protocol. However, a safe and secure environment is imperative for autonomous as well as connected vehicles. Therefore, CAN security is considered one of the important topics in the automotive research community. In this paper, we propose a four-stage intrusion detection system that uses the chi-squared method and can detect both strong and weak cyberattacks in a CAN. This work is the first graph-based defense system proposed for the CAN. Our experimental results show misclassification rates of only 5.26% for denial-of-service (DoS) attacks, 10% for fuzzy attacks, and 4.76% for replay attacks, with no misclassification for spoofing attacks. In addition, the proposed methodology exhibits up to 13.73% better accuracy than existing ID sequence-based methods.
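The chi-squared test at the core of such a detector compares an observed window's ID-frequency distribution against the distribution expected from benign traffic. The sketch below shows only this statistical step, not the paper's full four-stage graph-based pipeline; the ID counts and the df=2 critical value are illustrative assumptions.

```python
def chi_squared(observed, expected):
    """Pearson chi-squared statistic between an observed window's
    ID-frequency counts and the expected (attack-free) counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Expected per-window counts for three hypothetical message IDs,
# estimated from benign traffic.
expected = [50, 30, 20]
benign = [52, 29, 19]
flooded = [90, 5, 5]  # a DoS-like flood skews the distribution

THRESHOLD = 5.99  # chi-squared critical value for df=2, alpha=0.05
print(chi_squared(benign, expected) < THRESHOLD)   # True: window accepted
print(chi_squared(flooded, expected) > THRESHOLD)  # True: window flagged
```

A window whose statistic exceeds the critical value deviates from benign traffic more than chance would allow at the chosen significance level, which is how both heavy floods and subtler frequency shifts can be caught with the same test.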
Diabetic patients are at risk of developing several eye diseases, such as diabetic retinopathy (DR), diabetic macular edema (DME), and glaucoma. DR is an eye disease that harms the retina; DME develops from the accumulation of fluid in the macula; and glaucoma damages the optic disk and causes vision loss in advanced stages. Because these diseases progress slowly and show few signs in their early stages, detection is a difficult task. Therefore, a fully automated system is required to support detection and screening at early stages. In this paper, an automated disease localization and segmentation approach based on the Fast Region-based Convolutional Neural Network (FRCNN) algorithm with fuzzy k-means (FKM) clustering is presented. FRCNN is an object detection approach that requires bounding-box annotations; however, the datasets do not provide them, so we generated these annotations from the ground truths. Afterward, FRCNN is trained on the annotated images to localize diseased regions, which are then segmented through FKM clustering. The segmented regions are compared against the ground truths through intersection-over-union operations. For performance evaluation, we used the Diaretdb1, MESSIDOR, ORIGA, DR-HAGIS, and HRF datasets. A rigorous comparison against the latest methods confirms the efficacy of the approach in terms of both disease detection and segmentation.
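The intersection-over-union comparison used for evaluation above is a standard per-pixel overlap measure between a predicted segmentation mask and the ground truth. A minimal sketch, with a toy 10x10 "lesion" mask rather than real retinal data:

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection-over-union between a predicted segmentation mask
    and the ground-truth mask (boolean arrays of equal shape)."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union else 1.0

gt = np.zeros((10, 10), dtype=bool)
gt[2:8, 2:8] = True        # 36-pixel ground-truth region
pred = np.zeros((10, 10), dtype=bool)
pred[3:8, 3:8] = True      # 25-pixel prediction, fully inside the truth
print(round(iou(pred, gt), 3))  # 25/36, i.e. 0.694
```

IoU penalizes both under-segmentation (missed lesion pixels shrink the intersection) and over-segmentation (spurious pixels grow the union), which is why it is the usual yardstick for segmentation quality.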