Humans look forward to living in modern, comfortable environments such as smart homes. In this study, an effective, user-friendly smart home prototype is designed at low cost. The prototype contains eight light-emitting diodes (LEDs), which represent home appliances and are controlled in real time using eight proposed hand cases. The hand cases correspond to different hand positions relative to head and shoulder levels. The hand position is detected using a newly proposed algorithm programmed in Matlab. The Viola-Jones method is used to detect the hand against complex backgrounds (the hand with different backgrounds) by training the computer on positive (hand) and negative (non-hand) image datasets. To make the training faster and more accurate, a new idea based on skin detection is applied before training to determine the location and size of all positive images automatically. The LEDs in the prototype are switched ON/OFF by the proposed hand cases with a short response time of 0.43 seconds.
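The automatic labelling step described above can be illustrated with a minimal sketch: a fixed YCbCr skin threshold produces a binary mask, and the mask's bounding box gives the location and size of the hand in a positive image. The threshold values and the synthetic test image are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np

def skin_mask(rgb):
    """Binary skin mask via fixed YCbCr chrominance thresholds
    (a common rule of thumb: 77 <= Cb <= 127, 133 <= Cr <= 173)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def auto_label(rgb):
    """Return (x, y, w, h) of the skin region's bounding box, or None.
    This plays the role of the automatic positive-image annotation."""
    ys, xs = np.nonzero(skin_mask(rgb))
    if xs.size == 0:
        return None
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return (int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1))

# Synthetic positive image: a skin-toned patch on a non-skin background.
img = np.zeros((120, 160, 3), dtype=np.uint8)
img[...] = (50, 100, 50)              # greenish background, not skin
img[30:90, 40:100] = (200, 150, 120)  # skin-toned "hand" region
print(auto_label(img))  # (40, 30, 60, 60)
```

The box produced this way can be fed directly to a cascade trainer as the positive-sample annotation, removing the manual labelling step.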
Static hand gesture recognition is critical in the development of systems for human-computer interaction. Many human-computer interactions, such as human-robot interaction, game control, control of smart home devices, and others, use hand gestures as a fundamental and natural body language. The direction of rotation of static hand gestures is the subject of this research, with a focus on six rotation angles (0°, 45°, 90°, 180°, 270°, and 315°). This work presents an approach that can recognize the angle of hand gestures based on the Aggregate Channel Features (ACF) detector. The approach consists of three main stages: preprocessing (image labelling), computer training, and hand angle detection based on the ACF detector. The training process consists of 25 stages. The static hand gesture dataset contained 569 images (361 for training and 208 for testing). The average time cost to detect all hand gesture angles was 0.9445 seconds, and all hand angles were recognized with 100% accuracy. This is a strong indication in support of the approach.
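To make the ACF stage concrete, the following is a minimal sketch of the channel computation underlying an ACF detector: per-pixel channels (intensity, gradient magnitude, and oriented-gradient channels) are summed over small blocks to form the aggregated feature maps. The block size, number of orientation bins, and the grayscale-only input are simplifying assumptions; the actual detector adds color channels, smoothing, and boosted decision trees on top of these features.

```python
import numpy as np

def acf_channels(gray, n_orient=6, shrink=4):
    """Sketch of Aggregate Channel Features for a grayscale image:
    compute per-pixel channels, then sum each over shrink x shrink blocks."""
    g = gray.astype(float)
    gy, gx = np.gradient(g)
    mag = np.hypot(gx, gy)                       # gradient magnitude channel
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # orientation in [0, pi)
    bins = np.floor(ang / np.pi * n_orient).astype(int) % n_orient
    chans = [g, mag]
    for k in range(n_orient):                    # one channel per orientation bin
        chans.append(np.where(bins == k, mag, 0.0))
    h, w = g.shape
    h2, w2 = h // shrink * shrink, w // shrink * shrink
    return np.stack([
        c[:h2, :w2]
        .reshape(h2 // shrink, shrink, w2 // shrink, shrink)
        .sum(axis=(1, 3))                        # aggregate over each block
        for c in chans
    ])                                           # (2 + n_orient, h//shrink, w//shrink)

img = np.zeros((32, 32))
img[:, 16:] = 1.0                                # image with one vertical edge
feat = acf_channels(img)
print(feat.shape)  # (8, 8, 8)
```

Because the oriented-gradient channels respond differently as the hand rotates, a detector trained per angle class on such features can separate the six rotation angles studied here.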
One of the main challenges in computer vision is determining the number, shape, location, and color of targets within the image plane for use in computer control systems. In this study, an algorithm is introduced to detect the number of targets (one or two), their shapes (square or circle), and their colors (red, green, or blue). A new technique is presented in the form of a digital indexing code table that represents the studied color-target images. The indexing table technique depends on decimal and binary numbers. In this study, 42 different cases represent all the input images. A special case is considered for the similarity of input images that have the same shape and color but differ in rotation or in the spacing between the two targets; this is solved by referencing to identify the same target in each case. The classification results of the presented algorithm were 100% for all input cases.
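The case count is consistent with a simple enumeration: 2 shapes × 3 colors gives 6 single-target cases, and ordered pairs give 6 × 6 = 36 two-target cases, for 42 in total. The sketch below shows one possible decimal indexing scheme over these cases; the exact code table in the paper may differ, so the encoding here is an illustrative assumption.

```python
from itertools import product

SHAPES = ("square", "circle")
COLORS = ("red", "green", "blue")

def target_code(shape, color):
    """Decimal code 0..5 for a single target (shape index * 3 + color index)."""
    return SHAPES.index(shape) * len(COLORS) + COLORS.index(color)

def case_index(targets):
    """Map one or two (shape, color) targets to a unique decimal case index.
    Single targets occupy indices 0-5; ordered pairs occupy 6-41,
    matching the 42 cases: 6 + 6*6 = 42."""
    if len(targets) == 1:
        return target_code(*targets[0])
    a, b = (target_code(*t) for t in targets)
    return 6 + a * 6 + b

# Enumerate every case and verify the table covers exactly 42 indices.
singles = [((s, c),) for s, c in product(SHAPES, COLORS)]
pairs = [(t1, t2) for t1 in product(SHAPES, COLORS)
                  for t2 in product(SHAPES, COLORS)]
all_cases = singles + pairs
indices = sorted(case_index(t) for t in all_cases)
print(len(all_cases), indices[0], indices[-1])  # 42 0 41
print(format(case_index((("circle", "blue"),)), "06b"))  # binary form of a code
```

Each decimal index also has a fixed-width binary form, which is the sense in which the table "depends on decimal and binary numbers."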