In this study, we propose a gesture recognition algorithm based on support vector machines (SVM) with histogram of oriented gradients (HOG) features, and we also apply a CNN model to classify gestures. We select techniques suited to applying gesture control to a robotic system. The goal of the algorithm is to detect gestures at real-time processing speed, minimize interference, and reduce the likelihood of capturing unintentional gestures. The static gesture controls used in this study include on, off, increase, and decrease; the motion gestures include toggling the status switch and increasing or decreasing the volume. Results show that the algorithm achieves up to 99% accuracy with a 70-millisecond execution time per frame, which is suitable for industrial applications.
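The HOG feature extraction step underlying the SVM classifier can be sketched as follows. This is a minimal, numpy-only illustration of the descriptor, not the authors' implementation; the cell size and orientation bin count are assumed values, and the trained SVM stage is omitted.

```python
import numpy as np

def hog_features(gray, cell=8, bins=9):
    """Compute a simple histogram-of-oriented-gradients descriptor.

    gray: 2D float array (grayscale image), height and width divisible by `cell`.
    Returns a 1D feature vector of per-cell orientation histograms,
    which would then be fed to an SVM classifier.
    """
    # Image gradients via central differences.
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)                      # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation in [0, 180)

    h, w = gray.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            # Magnitude-weighted orientation histogram for this cell.
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-9)       # L2-normalize the descriptor

# Example: a 16x16 synthetic gradient image -> 4 cells x 9 bins = 36 features.
img = np.tile(np.linspace(0, 1, 16), (16, 1))
vec = hog_features(img, cell=8, bins=9)
print(vec.shape)  # (36,)
```

In a full pipeline, descriptors like `vec` computed over labeled gesture images would train the SVM, and the same descriptor computed per frame would be classified at runtime.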
Developing self-driving cars is an important foundation for intelligent transportation systems built on advanced telecommunications network infrastructure such as 6G networks. The paper addresses two main problems, namely, lane detection and obstacle detection (road signs, traffic lights, vehicles ahead, etc.) through image processing algorithms. To overcome the low detection accuracy of traditional image processing methods and the poor real-time performance of deep-learning-based methods, lane and obstacle detection algorithms for smart traffic are proposed. For lane detection, we first correct the image distortion caused by the camera and apply a thresholding algorithm. A top-down view of the image is then obtained by extracting a region of interest and applying an inverse perspective transform. Finally, we use the sliding window method to locate the pixels belonging to each lane and fit them with a quadratic equation. For the obstacle detection problem, the YOLO algorithm is suitable for identifying many types of obstacles. Finally, we use real-time videos and the TuSimple dataset to run simulations of the proposed algorithm. The simulation results show that the proposed lane detection achieves 97.91% accuracy with a processing time of 0.0021 seconds, and the proposed obstacle detection achieves 81.90% accuracy with a processing time of 0.022 seconds. Compared with the traditional image processing method, whose average accuracy and execution time are 89.90% and 0.024 seconds, the proposed method also shows strong noise resistance. The results prove that the proposed algorithm can be deployed in self-driving car systems requiring the high processing speed of advanced networks.
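The sliding-window lane-fitting step described above can be illustrated on a synthetic binary bird's-eye (top-down) mask. This is a hedged sketch, not the paper's code: the number of windows, search margin, and minimum-pixel threshold are assumed values, and the single-lane case is shown for brevity.

```python
import numpy as np

def fit_lane(binary, n_windows=9, margin=20, minpix=10):
    """Locate lane pixels with sliding windows and fit x = a*y^2 + b*y + c.

    binary: 2D array of 0/1 lane-pixel mask in bird's-eye (top-down) view,
            i.e., the output of thresholding plus inverse perspective transform.
    Returns quadratic coefficients (a, b, c) for the lane line.
    """
    h, w = binary.shape
    # Start the search where the column histogram of the lower half peaks.
    hist = binary[h // 2:, :].sum(axis=0)
    x_current = int(np.argmax(hist))

    ys, xs = binary.nonzero()
    win_h = h // n_windows
    lane_idx = []
    for win in range(n_windows):          # scan windows from bottom to top
        y_low, y_high = h - (win + 1) * win_h, h - win * win_h
        good = ((ys >= y_low) & (ys < y_high) &
                (xs >= x_current - margin) & (xs < x_current + margin))
        lane_idx.append(good)
        if good.sum() > minpix:
            # Re-center the next window on the mean x of the found pixels.
            x_current = int(xs[good].mean())
    sel = np.any(lane_idx, axis=0)
    return np.polyfit(ys[sel], xs[sel], 2)  # quadratic: x as a function of y

# Synthetic curved lane: x = 0.0005*y^2 + 50 on a 200x200 mask.
mask = np.zeros((200, 200), dtype=np.uint8)
y = np.arange(200)
x = (0.0005 * y**2 + 50).astype(int)
mask[y, x] = 1
a, b, c = fit_lane(mask)  # recovers coefficients close to (0.0005, 0, 50)
```

Fitting `x` as a function of `y` (rather than the reverse) is the usual choice here, because a lane in a bird's-eye view is nearly vertical and may have several `x` values that never repeat per row.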
The paper proposes a system for identifying gestures and actions in smart homes. The proposed method combines MobileNetV2 feature extraction with a single shot detector (SSD) network. We used eleven gesture and action classes: walking, sitting down, falling back, wearing shoes, waving hands, falling down, smoking, baby crawling, standing up, reading, and typing. In this system, data captured from the cameras of mobile devices are used to detect the object, and detected objects are marked in the frame by bounding boxes. The results show that the system meets the requirements with an accuracy of over 90%, which is suitable for real applications.
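An SSD-style detector emits many overlapping candidate boxes per object, so a non-maximum suppression (NMS) step is standard before drawing the final bounding boxes on the frame. A minimal sketch of greedy NMS follows; the IoU threshold of 0.5 is an assumed value, and this is an illustration of the standard post-processing step, not the authors' code.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes:  (N, 4) array of [x1, y1, x2, y2] corners.
    scores: (N,) confidence scores from the detector head.
    Returns indices of the boxes to keep, highest score first.
    """
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # candidates by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with the remaining candidates.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop candidates that overlap the kept box too much.
        order = order[1:][iou <= iou_thresh]
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # -> [0, 2]: the near-duplicate of box 0 is suppressed
```

The surviving boxes are then drawn on the frame with their class labels, which matches the bounding-box output described above.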