An image is worth a thousand words; a face image conveys rich information about a person's identity, gender, age, and emotional state. Facial expressions play an important role in social interactions and are often used in the behavioral analysis of emotions. Automatic recognition of facial expressions from a face image is a challenging task in the computer vision community and has a wide range of applications, such as driver safety, human-computer interaction, health care, behavioral science, video conferencing, cognitive science, and others. In this work, a deep-learning-based scheme is proposed for identifying the facial expression of a person. The proposed method consists of two parts. The former extracts local features from face images using a local gravitational force descriptor, while, in the latter, the descriptor is fed into a novel deep convolutional neural network (DCNN) model. The proposed DCNN has two branches. The first branch explores geometric features, such as edges, curves, and lines, whereas holistic features are extracted by the second branch. Finally, a score-level fusion technique is adopted to compute the final classification score. The proposed method, along with 25 state-of-the-art methods, is implemented on five publicly available benchmark databases,
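Score-level fusion of the two branch outputs can be illustrated with a minimal sketch. The weighted-sum rule, the weight `w`, and the function name below are illustrative assumptions, not the paper's exact fusion scheme:

```python
import numpy as np

def score_level_fusion(geometric_scores, holistic_scores, w=0.5):
    """Fuse per-class scores from the two DCNN branches.

    A weighted-sum sketch; the fusion rule and weight `w` are
    assumptions for illustration, not the paper's exact scheme.
    """
    g = np.asarray(geometric_scores, dtype=float)
    h = np.asarray(holistic_scores, dtype=float)
    fused = w * g + (1.0 - w) * h
    # Predicted class is the index of the highest fused score
    return int(np.argmax(fused)), fused

# Example: softmax-like scores over 3 expression classes from each branch
label, fused = score_level_fusion([0.2, 0.5, 0.3], [0.1, 0.3, 0.6])
# label → 2
```

With equal weights the fusion simply averages the two branches; in practice the weight would be tuned on a validation set.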
This work was supported in part by the project (Prediction of diseases through computer assisted diagnosis system using images captured by minimally-invasive and non-invasive modalities),
Thermal infrared (IR) images capture the temperature distribution over facial muscles and blood vessels, and these temperature changes can be regarded as texture features of the images. A comparative study of two face recognition methods working in the thermal spectrum is carried out in this paper. In the first approach, the training and test images are processed with the Haar wavelet transform, and the LL band and the average of the LH/HL/HH band subimages are created for each face image. A total confidence matrix is then formed for each face image by taking a weighted sum of the corresponding pixel values of the LL band and the average band.
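The first approach can be sketched as follows, using a hand-rolled single-level 2D Haar transform. The band weights `w_ll` and `w_avg` and the function names are assumptions for illustration; the paper's actual weights are not stated here:

```python
import numpy as np

def haar_subbands(img):
    """Single-level 2D Haar transform of an even-sized grayscale image.

    Returns the LL approximation band and the LH, HL, HH detail bands,
    each of half the original size in both dimensions.
    """
    a = np.asarray(img, dtype=float)
    # Transform rows: low-pass (pairwise average) and high-pass (pairwise difference)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Transform columns of each intermediate result
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def total_confidence_matrix(img, w_ll=0.7, w_avg=0.3):
    """Weighted sum of the LL band and the average of the LH/HL/HH bands.

    The weights here are illustrative, not the paper's values.
    """
    ll, lh, hl, hh = haar_subbands(img)
    avg = (lh + hl + hh) / 3.0
    return w_ll * ll + w_avg * avg
```

A production implementation would typically use a wavelet library rather than a hand-rolled transform.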
For LBP feature extraction, each face image in the training and test datasets is divided into 161 subimages, each of size 8 × 8 pixels. LBP features are extracted from each such subimage and concatenated into a single feature vector. PCA is performed separately on each feature set for dimensionality reduction. Finally, two different classifiers, namely a multilayer feed-forward neural network and a minimum distance classifier, are used to classify the face images. The experiments have been performed on a database created in our own laboratory and on the Terravic Facial IR Database.
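The subimage-wise LBP step can be sketched as below. This is a basic 8-neighbour LBP with per-patch 256-bin histograms; the histogram representation, bit ordering, and a 16 × 16 toy image size are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np

def lbp_codes(patch):
    """Basic 8-neighbour LBP codes for the interior pixels of a patch."""
    p = np.asarray(patch, dtype=float)
    h, w = p.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = p[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = p[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set the corresponding bit where the neighbour >= centre pixel
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

def lbp_feature_vector(face, patch=8):
    """Split a face image into non-overlapping patch x patch subimages,
    compute a 256-bin LBP histogram per subimage, and concatenate them."""
    f = np.asarray(face)
    feats = []
    for y in range(0, f.shape[0] - patch + 1, patch):
        for x in range(0, f.shape[1] - patch + 1, patch):
            hist = np.bincount(
                lbp_codes(f[y:y + patch, x:x + patch]).ravel(),
                minlength=256)
            feats.append(hist)
    return np.concatenate(feats)

# Toy example: a 16x16 image yields 4 subimages, i.e. a 4 * 256 = 1024-dim vector
face = [[(i * 16 + j) % 7 for j in range(16)] for i in range(16)]
fv = lbp_feature_vector(face)
```

The concatenated histograms would then be reduced with PCA before classification, as described above.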