Facial expression recognition (FER) has become one of the most important fields of research in pattern recognition. In this paper, we propose a method for identifying people's emotions through their facial expressions. Robust against illumination changes, the method combines four steps: the Viola–Jones face detection algorithm, facial image enhancement using the contrast limited adaptive histogram equalization (CLAHE) algorithm, the discrete wavelet transform (DWT), and a deep convolutional neural network (CNN). Viola–Jones locates the face and facial parts; the facial image is enhanced using CLAHE; facial features are then extracted using the DWT; and finally, the extracted features are used directly to train the CNN for classifying the facial expressions. Our experimental work was performed on the CK+ and JAFFE face databases, where the network achieved recognition rates of 96.46% and 98.43%, respectively.
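The DWT feature-extraction step above can be illustrated with a minimal one-level 2-D Haar transform in NumPy. The abstract does not specify which wavelet filter is used; Haar is chosen here purely for brevity, and the averaging normalization is one of several common conventions:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar discrete wavelet transform.

    Returns the approximation (LL) and detail (LH, HL, HH) sub-bands.
    Assumes the image has even height and width.
    """
    img = img.astype(np.float64)
    # 1-D transform along rows: pairwise averages (low-pass) and differences (high-pass)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # 1-D transform along columns of each intermediate result
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

# Example on a toy 4x4 "face crop": the LL sub-band is a smoothed,
# downsampled version of the input and would feed the CNN as a feature map.
face = np.arange(16, dtype=np.float64).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(face)
```

In a full pipeline, the sub-bands (or the LL band of a deeper decomposition) would be stacked and passed to the network as input channels.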
In the last decade, facial recognition has been among the most important fields of research in biometric technology. In this paper, we present a face recognition (FR) system divided into three steps: the Viola–Jones face detection algorithm, facial image enhancement using a modified contrast limited adaptive histogram equalization algorithm (M-CLAHE), and feature learning for classification, for which we evaluated the VGG16, ResNet50, and Inception-v3 convolutional neural network (CNN) architectures. Our experimental work was performed on the Extended Yale B and CMU PIE face databases. The Inception-v3 architecture achieved recognition rates of 99.44% and 99.89%, respectively, and a comparison with other methods on both databases shows the robustness and effectiveness of the proposed approach.
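The contrast-limiting idea behind CLAHE-style enhancement can be sketched for a single tile in NumPy. The abstract does not describe what the "modified" variant changes, so this is only the generic clipping step: bins of the tile histogram are capped at a clip limit and the excess is redistributed before equalization, which bounds contrast amplification. Full CLAHE additionally splits the image into tiles and bilinearly interpolates the per-tile mappings:

```python
import numpy as np

def clip_limited_equalize(tile, clip_limit=40, n_bins=256):
    """Histogram equalization of one tile with a clipped histogram.

    This shows only the clipping/redistribution step of CLAHE; the
    tile grid and bilinear blending of mappings are omitted.
    """
    hist, _ = np.histogram(tile.ravel(), bins=n_bins, range=(0, n_bins))
    excess = np.maximum(hist - clip_limit, 0).sum()
    # cap each bin and spread the clipped mass uniformly over all bins
    hist = np.minimum(hist, clip_limit) + excess // n_bins
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)  # normalise to [0, 1]
    mapping = np.round(cdf * (n_bins - 1)).astype(np.uint8)
    return mapping[tile]

# Example: an 8x8 tile whose intensities span only 0..63 gets stretched
# to the full 0..255 range by the equalization mapping.
tile = np.arange(64).reshape(8, 8).astype(np.uint8)
equalized = clip_limited_equalize(tile)
```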
As the coming era is that of digitized medical information, an important challenge is meeting the storage and transmission requirements of enormous volumes of data, including medical images. Compression is one of the indispensable techniques for solving this problem. In this work, we propose an algorithm for medical image compression based on the biorthogonal wavelet transform CDF 9/7 coupled with the SPIHT coding algorithm, to which we applied the lifting structure to overcome the drawbacks of the conventional wavelet transform. To assess our algorithm's compression performance, we compared its results with those of classical wavelet filter banks. Experimental results show that the proposed algorithm is superior to traditional methods in both lossy and lossless compression for all tested images, achieving high PSNR and MSSIM values for MRI images.
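The lifting structure for the CDF 9/7 wavelet is determined by a well-known set of predict/update coefficients (Daubechies–Sweldens factorization). A minimal 1-D NumPy sketch is below; it uses periodic boundary handling for brevity, whereas a real codec would use symmetric extension, and the SPIHT coding stage is not shown:

```python
import numpy as np

# Standard lifting coefficients for the CDF 9/7 biorthogonal wavelet
ALPHA, BETA = -1.586134342, -0.05298011854
GAMMA, DELTA = 0.8829110762, 0.4435068522
K = 1.149604398

def cdf97_forward(x):
    """One level of the forward CDF 9/7 transform via lifting.

    `x` must have even length; boundaries are handled periodically.
    Returns the (scaled) approximation and detail sub-bands.
    """
    s, d = x[0::2].astype(np.float64), x[1::2].astype(np.float64)
    d += ALPHA * (s + np.roll(s, -1))   # predict 1
    s += BETA  * (d + np.roll(d, 1))    # update 1
    d += GAMMA * (s + np.roll(s, -1))   # predict 2
    s += DELTA * (d + np.roll(d, 1))    # update 2
    return s * K, d / K

def cdf97_inverse(s, d):
    """Undo the lifting steps in exactly the reverse order."""
    s, d = s / K, d * K
    s -= DELTA * (d + np.roll(d, 1))
    d -= GAMMA * (s + np.roll(s, -1))
    s -= BETA  * (d + np.roll(d, 1))
    d -= ALPHA * (s + np.roll(s, -1))
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = s, d
    return x
```

Because each lifting step is exactly invertible, the scheme reconstructs the signal perfectly (up to floating-point rounding), which is what makes lifting attractive for lossless as well as lossy compression.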
Recent developments in medical imaging techniques have opened a completely new research field in image processing, whose principal aim is to improve medical diagnosis through segmented images. Techniques have been developed to help identify specific structures within a magnetic resonance image (MRI); among them are active contour methods, which adapt to the desired features in the image. In this work, we describe two classes of active contour models and discuss application aspects in the medical imaging area.
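As one concrete instance of the parametric class of active contours, the classical Kass–Witkin–Terzopoulos snake evolves a closed curve with a semi-implicit update: the internal (elasticity/rigidity) energy is folded into a fixed pentadiagonal matrix that is inverted once, and the external image force is applied explicitly. The abstract does not commit to a particular discretization; this NumPy sketch uses the standard scheme with a zero external force (a real application would sample image-gradient forces at the contour points):

```python
import numpy as np

def snake_matrix(n, alpha=0.1, beta=0.1, tau=1.0):
    """Precompute (I + tau*A)^-1 for a closed snake of n points.

    A encodes the internal energy: alpha penalises stretching (second
    differences), beta penalises bending (fourth differences), with
    periodic boundary conditions for a closed contour.
    """
    row = np.zeros(n)
    row[[0, 1, -1]] = 2 * alpha + 6 * beta, -(alpha + 4 * beta), -(alpha + 4 * beta)
    row[[2, -2]] = beta, beta
    A = np.stack([np.roll(row, k) for k in range(n)])  # circulant matrix
    return np.linalg.inv(np.eye(n) + tau * A)

def snake_step(pts, force, inv_mat, tau=1.0):
    """One semi-implicit step; pts and force are (n, 2) arrays."""
    return inv_mat @ (pts + tau * force)

# Example: with no external force, the internal energy alone makes a
# circular contour shrink smoothly toward its centroid.
n = 20
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
inv_mat = snake_matrix(n)
new_pts = snake_step(pts, np.zeros_like(pts), inv_mat)
```

The semi-implicit treatment of the stiff internal term is what allows stable iteration with reasonably large time steps `tau`.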
In biometric systems, compression plays an important role, especially in reducing the size of the information stored in, or transmitted through, distributed biometric systems. Compression techniques, however, introduce a loss of information in the compressed images that can affect the effectiveness of biometric systems. The main objective of our contribution is to examine whether the proposed method offers good compression quality for this kind of image without considerable distortion. To evaluate the efficacy of the compression process, we use two kinds of evaluation: full-reference image quality assessment and a newly proposed textural quality analysis of the compressed images. In this paper, we use a second-generation wavelet transform to improve the compression of biometric images. The basic idea of the algorithm is the quincunx wavelet transform coupled with a modified progressive encoder called SPIHT-Z.
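The full-reference quality assessment mentioned above typically rests on metrics such as PSNR and SSIM, which compare the compressed image against the original. A minimal NumPy sketch of both follows; note that the usual (M)SSIM averages the SSIM statistic over local sliding windows, whereas this sketch computes it once globally for brevity:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a compressed image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, test, peak=255.0):
    """Single-window SSIM; real (M)SSIM averages this over local windows."""
    ref, test = ref.astype(np.float64), test.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # stabilising constants
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For an identical pair of images, PSNR is infinite and SSIM equals 1; both decrease as compression distortion grows, which is how a rate–distortion study would use them.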