In recent years, facial recognition has become a central problem in computer vision, attracting considerable interest because of its use across many application domains and in image analysis. Recognition relies on the extraction of facial descriptors, a critical step in any face-recognition pipeline. In this article, we compare robust feature-extraction methods (SIFT, PCA-SIFT, ASIFT, and SURF) for extracting relevant facial information under different variations of facial pose (open and closed mouth, glasses and no glasses, open and closed eyes). The simulation results show that the SURF detector outperforms the others in descriptor similarity and computation time. Our method normalizes the descriptor vectors and combines them with the RANSAC algorithm to discard outliers before computing the Hessian matrix, with the aim of reducing computation time. To validate the approach, we tested four facial-image databases containing several modifications. The simulation results show that our method is more efficient than the other detectors in recognition speed and in determining similar points between two images of the same face, one drawn from the test set and the other from the training set with different modifications. The method can be deployed on a mobile platform to analyze simple image content, for example for driver-fatigue detection, human-machine interaction, or human-robot interaction, using descriptors whose properties support good accuracy and real-time response.
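The descriptor normalization and nearest-neighbour matching step described above can be sketched in NumPy. This is an illustrative sketch under our own assumptions (Lowe's ratio test as the uniqueness criterion; the function name is ours), not the paper's exact SURF/RANSAC pipeline:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match L2-normalized descriptors with a ratio test."""
    # Normalize each descriptor vector to unit length.
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]     # two closest candidates
        if row[j1] < ratio * row[j2]:    # keep only unambiguous matches
            matches.append((i, j1))
    return matches
```

Matches that survive the ratio test would then be passed to RANSAC, which repeatedly fits a model on random minimal subsets and keeps the consensus set, to discard the remaining outliers.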
Epipolar geometry is a key concept in computer vision, and fundamental matrix estimation is its central problem: the geometric relation between two views can be recovered from the fundamental matrix. We are therefore interested in computing an accurate matrix from features that are unevenly distributed across complex scene images. This paper presents a method that first detects points with the Harris detector and then applies a new modification of the multi-level weight function used in the M-estimator algorithm. Experimental comparisons were conducted by simulation between RANSAC, LMedS, and the M-estimator to evaluate the projection error. The proposed method yields a significant improvement in performance, with a lower projection error than the other methods.
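The M-estimator idea, downweighting points with large residuals inside an iteratively reweighted least-squares (IRLS) loop, can be illustrated on a simple line fit. This sketch uses the classical Huber weight function as a stand-in for the modified multi-level weight function proposed here; the function names are ours:

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber M-estimator weights: 1 for small residuals, k/|r| beyond."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / a)

def irls_line(x, y, iters=20):
    """Fit y = m*x + c by iteratively reweighted least squares."""
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        # Weighted least squares: scale rows by sqrt of the weights.
        sol, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * A,
                                  np.sqrt(w) * y, rcond=None)
        r = y - A @ sol
        # Robust scale estimate (MAD), so weights are scale-invariant.
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12
        w = huber_weights(r / scale)
    return sol  # [slope, intercept]
```

Unlike RANSAC and LMedS, which select a consensus subset, the M-estimator keeps every point but bounds the influence of large residuals, which is why the choice of weight function matters.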
Estimating the fundamental matrix (F) determines the epipolar geometry and establishes a geometric relation between two images of the same scene or between video frames. The literature offers many robust estimation techniques, such as RANSAC (random sample consensus), least median of squares (LMedS), and M-estimators. This article compares the Harris, FAST, SIFT, and SURF detectors in terms of the number of detected points, the number of correct matches, and the speed of computing F. Our method first extracts descriptors with SURF, chosen over the alternatives for its robustness; it then sets a uniqueness threshold to retain the best points, normalizes them, and ranks them according to the weighting function of the different regions; finally, it estimates F with the eight-point M-estimator and measures the average error and the computation speed of F. The experimental simulations, applied to real images under various viewpoint changes (for example rotation, lighting, and moving objects), show good performance in terms of the computation speed of the fundamental matrix and an acceptable average error, suggesting that the technique is suitable for real-time applications.
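The eight-point estimation of F can be sketched with the classical normalized eight-point algorithm (Hartley normalization of the correspondences, a linear solve, and a rank-2 constraint). This is a plain linear sketch without the M-estimator weighting described above, and the function names are ours:

```python
import numpy as np

def normalize_pts(p):
    """Hartley normalization: zero mean, average distance sqrt(2)."""
    c = p.mean(axis=0)
    d = np.sqrt(((p - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    ph = np.column_stack([p, np.ones(len(p))]) @ T.T
    return ph, T

def eight_point(p1, p2):
    """Estimate F from >= 8 correspondences so that x2^T F x1 = 0."""
    x1, T1 = normalize_pts(p1)
    x2, T2 = normalize_pts(p2)
    # Each correspondence contributes one row of the system A f = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint by zeroing the smallest singular value.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1          # undo the normalization
    return F / np.linalg.norm(F)
```

An M-estimator variant would reweight the rows of A by a function of each point's epipolar residual and iterate, rather than solving the unweighted system once.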
Facial recognition technology is used in many fields such as security, biometric identification, robotics, video surveillance, health, and commerce because of its ease of implementation and minimal data-processing time. However, it is sensitive to variations such as pose, lighting, and occlusion. In this paper, we propose a new approach to improve the accuracy of face recognition in the presence of variation or occlusion by combining feature extraction techniques, namely the histogram of oriented gradients (HOG), the scale-invariant feature transform (SIFT), Gabor filters, and the Canny edge detector, with a convolutional neural network (CNN) architecture, tested with several combinations of output activation function (Softmax and Sigmoid) and training optimizer (Adam, Adamax, RMSprop, and stochastic gradient descent (SGD)). We first preprocess the two face databases used, ORL and Sheffield, then extract features with the techniques above and feed them to our CNN architecture. Our simulations show that the SIFT+CNN combination performs best in the presence of variations, with an accuracy of up to 100%.
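One of the feature extractors mentioned, HOG, reduces to computing an orientation histogram of image gradients per cell. The following is a minimal single-cell sketch in NumPy (our own simplification with assumed defaults of 9 unsigned-orientation bins, not the exact configuration used in the paper):

```python
import numpy as np

def hog_cell(patch, bins=9):
    """Orientation histogram of gradients for one image cell (HOG-style)."""
    gx = np.zeros_like(patch)
    gy = np.zeros_like(patch)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]   # central differences in x
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]   # central differences in y
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation
    idx = (ang / (180 / bins)).astype(int) % bins
    hist = np.zeros(bins)
    for b in range(bins):                        # magnitude-weighted vote
        hist[b] = mag[idx == b].sum()
    return hist / (np.linalg.norm(hist) + 1e-6)  # L2 normalization
```

In a full HOG descriptor these per-cell histograms are computed over a grid, normalized in overlapping blocks, and concatenated; that vector (or the SIFT, Gabor, or Canny output) is what the CNN receives as input.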