Abstract - Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. Many skin color detection algorithms extract human skin regions using the thresholding technique, since it is simple and computationally fast. The efficiency of each color space depends on its robustness to changes in lighting and on its ability to distinguish skin pixels in images with complex backgrounds. For more accurate skin detection, we propose a new threshold based on the RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. It then separates the Y channel, which represents the intensity of the color model, from the U and V channels to eliminate the effects of luminance. Next, the threshold values are selected by testing the boundaries of skin colors with the help of the color histogram. Finally, the threshold is applied to the input image to extract the skin regions. The detected skin regions were quantitatively compared to the actual skin regions in the input images to measure accuracy, and the results of our threshold were compared to those of other published thresholds to demonstrate the efficiency of our approach. The experimental results show that the proposed threshold is more robust to complex backgrounds and varying lighting conditions than the alternatives.
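The pipeline described in this abstract (convert RGB to YUV, discard the luminance channel Y, then threshold U and V) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the BT.601 conversion matrix is standard, but the `u_range` and `v_range` values here are placeholder assumptions, since the abstract does not state the trained thresholds.

```python
import numpy as np

def rgb_to_yuv(rgb):
    # BT.601 RGB -> YUV conversion; rgb is an (H, W, 3) float array in [0, 1].
    m = np.array([[ 0.299,  0.587,  0.114],   # Y (luminance)
                  [-0.147, -0.289,  0.436],   # U (blue-difference chroma)
                  [ 0.615, -0.515, -0.100]])  # V (red-difference chroma)
    return rgb @ m.T

def skin_mask(rgb, u_range=(-0.10, 0.02), v_range=(0.02, 0.20)):
    # Threshold only the chrominance channels (U, V); Y is discarded
    # to reduce sensitivity to illumination. The ranges are illustrative.
    yuv = rgb_to_yuv(rgb)
    u, v = yuv[..., 1], yuv[..., 2]
    return ((u >= u_range[0]) & (u <= u_range[1]) &
            (v >= v_range[0]) & (v <= v_range[1]))
```

In practice the ranges would be tuned from the U/V histograms of labelled skin samples, as the abstract describes.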
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. Despite numerous attempts to automate ventricle segmentation and tracking in echocardiography, the problem remains challenging because of low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method that aims in particular to increase the segmentability of echocardiographic features, such as the endocardium, and to improve image contrast. In addition, it seeks to expand the field of view, reduce the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature computed over all the overlapping images, using a combination of principal component analysis (PCA) and the discrete wavelet transform (DWT). For evaluation, the results of the proposed method were compared with those of several well-known techniques, and several metrics were used to assess its performance. The results indicate that the presented pixel-based method, based on the integration of PCA and DWT, achieves the best segmentability for cardiac ultrasound images and the best performance on all metrics.
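A generic PCA-plus-DWT fusion of two overlapping images can be sketched as below. This is a hedged illustration under common conventions, not the paper's algorithm: it assumes a one-level Haar transform standing in for the DWT, PCA (leading eigenvector of the two images' covariance) to weight the approximation band, and max-absolute selection for the detail bands. The paper's actual integration feature and wavelet choice are not specified in the abstract.

```python
import numpy as np

def haar_dwt2(x):
    # One-level 2D Haar transform; x must have even height and width.
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal detail
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    # Exact inverse of haar_dwt2.
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def pca_weights(img_a, img_b):
    # Leading eigenvector of the 2x2 covariance of the two images gives
    # their relative weights (normalised to sum to 1).
    cov = np.cov(np.vstack([img_a.ravel(), img_b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

def fuse(img_a, img_b):
    wa, wb = pca_weights(img_a, img_b)
    la, da = haar_dwt2(img_a)
    lb, db = haar_dwt2(img_b)
    ll = wa * la + wb * lb                      # PCA-weighted approximation
    bands = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                  for x, y in zip(da, db))      # max-abs detail fusion
    return haar_idwt2(ll, bands)
```

Keeping the detail bands by maximum absolute coefficient preserves the sharpest edges from either view, while the PCA weighting favours the image carrying more signal variance in the overlap.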
Recent years have witnessed renewed interest in developing skin segmentation approaches. Skin segmentation has been widely employed in computer vision applications, including face detection and hand gesture recognition systems. This is mostly due to the attractive characteristics of skin colour and its effectiveness for object segmentation. On the other hand, there are certain challenges in using human skin colour as a feature to segment dynamic hand gestures, owing to varying illumination conditions, complicated environments, and real-time computation constraints. These challenges have rendered many skin colour segmentation approaches inadequate. Therefore, to produce simple, effective, and cost-efficient skin segmentation, this paper proposes a skin segmentation scheme comprising two procedures for calculating generic threshold ranges in the Cb-Cr colour space. The first procedure uses threshold values trained online from nose pixels of the detected face region. The second, known as the offline training procedure, uses thresholds trained from skin samples and a weighted equation. The experimental results show that the proposed scheme achieves good performance in terms of both efficiency and computation time.
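The offline branch of such a scheme amounts to converting pixels to Cb-Cr and applying fixed trained ranges. The sketch below uses the JPEG-style YCbCr conversion and, for illustration only, the widely cited Chai-and-Ngan ranges (Cb in [77, 127], Cr in [133, 173]); the paper trains its own ranges from skin samples and a weighted equation, which are not reproduced here.

```python
import numpy as np

def rgb_to_cbcr(rgb):
    # JPEG-style YCbCr chrominance; rgb is an (H, W, 3) array in [0, 255].
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def skin_mask_cbcr(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    # Offline-style fixed thresholds; the default ranges are the
    # commonly cited generic values, standing in for trained ones.
    cb, cr = rgb_to_cbcr(np.asarray(rgb, dtype=float))
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

The online branch would follow the same masking step but derive `cb_range` and `cr_range` at run time from the nose pixels of a detected face, adapting the ranges to the current subject and lighting.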
Abstract - Computer facial animation is not a new endeavour, as it was first introduced in the 1970s. However, animating the human face still presents interesting challenges because of its familiarity: the face is the part of the body used to recognize individuals. Facial modelling and facial animation are both important in developing realistic computer facial animation, and both are needed to drive the animation. This paper reviews several geometric-based modelling techniques (shape interpolation, parameterization, and muscle-based animation) and data-driven animation techniques (image-based techniques, speech-driven techniques, and performance-driven animation) used in computer graphics and vision for facial animation. The main concepts and problems of each technique are highlighted in the paper.

Index Terms - Computer graphics, data-driven animation, facial animation, geometric-based modelling.

I. INTRODUCTION

Computer facial animation has experienced continuous and rapid growth since the pioneering work of Frederick I. Parke [1] in 1972. This is partly due to the increasing demand for virtual characters or avatars in the fields of gaming, film, human-computer interaction, and human-machine communication.

The human face is one of the channels for expressing affective states. It has a complex but flexible three-dimensional (3D) surface. Representing the human face in a computer system is a challenging task because several goals must be achieved. According to Deng and Noh [2], the main goal of facial modelling and animation research is to develop a highly adaptable system that creates realistic animation in real time while reducing manual processing. Here, high adaptability refers to a system that is easily adapted to any individual's face.

Various approaches have been proposed by the research community to improve the performance of facial animation in different aspects.
Some claim that good facial modelling and animation involve lip synchronization and the use of the eyes, brows, and lids for expression [3]. Others believe that a facial model should include further attributes such as surface colours and textures [4].

The focus of this paper is to present a review of past and recent advancements in facial animation research. In addition, an overview of the facial animation framework is given to show where the modelling and animation techniques take place. The goal of this review is to inform improvements to current computer facial animation systems in light of recent advances in the technology.

II. A GENERAL FACIAL ANIMATION FRAMEWORK

Animating facial action is a tedious and complicated task. The construction of facial animation proceeds in several stages, as shown in Fig. 1. Each facial animation starts from the creation of the head model. The facial geometry is created from input data captured from either images or motions. The input data are pre-processed and filtered, and only the corresponding data are used to create the basis of the morphable meshes. Next, geometric-based modelling is a...
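Shape interpolation, the first of the geometric-based modelling techniques listed above, can be illustrated as a linear blend of morph targets over a neutral mesh. This is a minimal sketch under common blendshape conventions; the vertex arrays and weights here are hypothetical, not taken from any system in the review.

```python
import numpy as np

def blend_shapes(neutral, targets, weights):
    # Linear shape interpolation (blendshapes): add each morph target's
    # offset from the neutral mesh, scaled by its weight.
    # neutral, targets[i]: (n_vertices, 3) arrays of vertex positions.
    result = neutral.astype(float).copy()
    for target, w in zip(targets, weights):
        result += w * (target - neutral)
    return result
```

With weights of 0 the neutral face is reproduced exactly, and with a weight of 1 a single target is reproduced exactly; intermediate weights interpolate between expressions, which is why the technique is simple but limited to the span of its pre-built targets.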