Social behavior and many cultural conventions are influenced by gender. Automatic face-based gender recognition has numerous potential applications, such as human-computer interaction systems, content-based image search, and video surveillance. The immense increase in images uploaded online has fostered the construction of large labeled datasets. Recently, impressive progress has been demonstrated on the closely related task of face verification using deep convolutional neural networks. In this paper we explore the applicability of deep convolutional neural networks to gender classification by fine-tuning a pretrained network. In addition, we explore the performance of dropout support vector machines trained on the deep features of the pretrained network as well as on the deep features of the fine-tuned network. We evaluate our methods on the color FERET dataset and the recently constructed Adience dataset, and report cross-validated performance rates on each. We further explore the generalization capabilities of our approach by conducting cross-dataset tests. Our fine-tuning method is demonstrated to exhibit state-of-the-art performance on both datasets.
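The abstract mentions dropout support vector machines trained on deep features. As a minimal sketch of that idea (not the authors' implementation), the fragment below trains a linear hinge-loss SVM by SGD while randomly dropping input features at each step; the "deep features" here are synthetic stand-ins for CNN activations, and all hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_dropout_svm(X, y, drop_p=0.5, lr=0.01, lam=1e-3, epochs=50):
    """Linear SVM with hinge loss, applying inverted dropout to the
    input features during each SGD update (a simple dropout-SVM sketch)."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Randomly zero features, rescaling survivors (inverted dropout).
            mask = (rng.random(d) >= drop_p) / (1.0 - drop_p)
            xi = X[i] * mask
            margin = y[i] * (xi @ w + b)
            if margin < 1:  # hinge loss is active: push the margin out
                w -= lr * (lam * w - y[i] * xi)
                b += lr * y[i]
            else:           # only the L2 regularizer contributes
                w -= lr * lam * w
    return w, b

# Synthetic "deep features" for two classes (stand-ins for CNN activations).
X = np.vstack([rng.normal(-1.0, 1.0, size=(50, 16)),
               rng.normal(+1.0, 1.0, size=(50, 16))])
y = np.array([-1] * 50 + [+1] * 50)

w, b = train_dropout_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

In practice the feature matrix would come from the penultimate layer of the pretrained or fine-tuned network rather than from a random generator.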
The Single Sample per Person Problem is a challenging problem for face recognition algorithms, and patch-based methods have obtained promising results for it. In this paper, we propose a new face recognition algorithm based on a combination of different histograms of oriented gradients (HOG), which we call Multi-HOG. Each member of Multi-HOG is a HOG patch belonging to a grid structure. To recognize faces, we create a vector of distances computed by comparing training and test face images. A distance calculation method is then employed to compute the final distance between a test and a reference image. We describe two such methods: the mean of minimum distances (MMD) and a multi-layer perceptron based distance (MLPD). To cope with alignment difficulties, we also propose a technique that finds the most similar regions of two compared images, which we call the most similar region selection (MSRS) algorithm; the regions found by MSRS are given to the proposed distance methods. Our results show that, while MMD and MLPD yield much higher accuracies than a single histogram of oriented gradients, combining them with MSRS results in state-of-the-art performance.
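The mean of minimum distances (MMD) rule described above can be sketched compactly: for each patch descriptor of the test image, take the distance to its nearest reference patch, then average those minima. The snippet below uses random placeholder vectors in place of actual HOG patch descriptors, so the dimensions and data are assumptions for illustration only:

```python
import numpy as np

def mmd(test_patches, ref_patches):
    """Mean of minimum distances: for each test patch descriptor, find
    the closest reference patch descriptor, then average those minima."""
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(test_patches[:, None, :] - ref_patches[None, :, :],
                       axis=2)
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
probe = rng.normal(size=(6, 36))                        # test-image patches
same_id = probe + rng.normal(scale=0.1, size=(6, 36))   # similar reference
other_id = rng.normal(size=(6, 36))                     # unrelated reference

score_same = mmd(probe, same_id)
score_other = mmd(probe, other_id)
# A smaller MMD score indicates a better match, so score_same < score_other.
```

Taking the minimum per patch before averaging gives some tolerance to local misalignment, which is the same concern MSRS addresses at the region level.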
In this paper we propose the use of several feature extraction methods, previously shown to perform well for object recognition, for recognizing handwritten characters. These methods are the histogram of oriented gradients (HOG), a bag of visual words using pixel intensity information (BOW), and a bag of visual words using extracted HOG features (HOG-BOW). These feature extraction algorithms are compared to other well-known techniques: principal component analysis, the discrete cosine transform, and the direct use of pixel intensities. The extracted features are given to three types of support vector machines for classification: a linear SVM, an SVM with the RBF kernel, and a linear SVM with L2-regularization. We evaluated the six feature descriptors and three SVM classifiers on three handwritten character datasets: Bangla, Odia, and MNIST. The results show that the HOG-BOW, BOW, and HOG methods significantly outperform the other methods. The HOG-BOW method performs best with the L2-regularized SVM and obtains very high recognition accuracies on all three datasets.
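Both of the HOG-based abstracts above rely on the same building block: a per-cell histogram of gradient orientations weighted by gradient magnitude. A minimal sketch of that step (cell size, bin count, and normalization are common defaults, assumed here rather than taken from the papers):

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Histogram of oriented gradients for one image cell: unsigned
    orientations (0-180 degrees) binned and weighted by magnitude."""
    gy, gx = np.gradient(cell.astype(float))   # gradients along rows, cols
    mag = np.hypot(gx, gy)                     # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())  # magnitude-weighted votes
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A horizontal intensity ramp has purely horizontal gradients (angle 0),
# so all the magnitude falls into the first orientation bin.
ramp = np.tile(np.arange(8.0), (8, 1))
h = hog_cell_histogram(ramp)
```

A full HOG descriptor concatenates such histograms over a grid of cells (the "grid structure" of Multi-HOG), typically with block-level normalization on top.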