Recently, 3D face recognition algorithms have outperformed conventional 2D approaches by adding depth data to the problem. However, regardless of the nature (2D or 3D) of the approach, most of them require the same data format at test time as the data used to train the system. This is the main drawback of 3D face research, since 3D data must be acquired under highly controlled conditions and, in most cases, requires the collaboration of the subject to be recognized. Thus, in real-world applications (access control points or surveillance) this kind of 3D data may not be available during the recognition process. This leads to a new paradigm of mixed 2D-3D face recognition systems, where 3D data is used in training but either 2D or 3D information can be used in recognition, depending on the scenario.
PARTIAL INFORMATION CONCEPT

The performance of face recognition systems that use 2D intensity images depends strongly on the conditions under which the image is acquired, e.g. face pose, illumination, or facial expression. Since a face is a 3D object, newer face recognition techniques have tried to add shape or depth information to make systems more robust to pose and lighting variations. Additionally, 3D data acquisition is becoming faster and cheaper thanks to dedicated 3D scanner devices and multi-camera systems [2]. Therefore, 3D face recognition research is gaining importance [3,4,5,6,7]. These 3D algorithms can be roughly divided into two categories. Approaches in the first group compute a depth map and an intensity map separately, apply a conventional 2D method to each modality, and combine the results as two different expert opinions [4,7]. The second category comprises model-based approaches that use complete 3D models of a face to perform the recognition [3,5,6]. The advantage of the first category is that it adds depth information to conventional approaches without greatly increasing the computational cost; on the other hand, most of these methods are not true 3D approaches and should rather be called 2.5D techniques, since they may lack multi-view information. Furthermore, the input to the recognition stage of these approaches must keep the same data format as the training images, i.e. if frontal views were used during training, then a frontal depth and/or intensity image may be required at recognition time [4]. In contrast, most model-based 3D face approaches fit texture images onto a 3D model. After this adjustment, they extract relevant features, in most cases geometrical parameters, that are then used in the recognition stage.
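The expert-opinion combination used by the first category of algorithms can be sketched as a simple score-level fusion of the two modality classifiers. The sketch below is a minimal illustration only: the min-max normalisation step, the weighted-sum rule, and the weight `w_depth` are common choices assumed here, not details taken from the cited approaches [4,7].

```python
import numpy as np

def fuse_scores(intensity_scores, depth_scores, w_depth=0.5):
    """Combine per-gallery-subject similarity scores from the intensity
    and depth experts via a weighted sum (one common fusion rule; the
    weight w_depth is a tunable assumption, not a value from the text).

    Returns the index of the best-matching gallery subject and the
    fused score vector.
    """
    s_i = np.asarray(intensity_scores, dtype=float)
    s_d = np.asarray(depth_scores, dtype=float)

    # Min-max normalise each expert's scores to [0, 1] so the two
    # modalities are comparable before mixing them.
    def minmax(s):
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    fused = (1.0 - w_depth) * minmax(s_i) + w_depth * minmax(s_d)
    return int(np.argmax(fused)), fused

# Hypothetical similarity scores of a probe against three gallery subjects:
best, fused = fuse_scores([0.2, 0.9, 0.4], [0.1, 0.8, 0.95])
print(best)  # subject 1 wins once both opinions are combined
```

Since each expert runs a conventional 2D pipeline on its own map, the fusion step adds almost no cost, which is precisely the appeal of this category noted above.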
In this case, the input images for the recognition phase can be common 2D intensity images, which are available in any kind of application under either controlled or uncontrolled acquisition conditions. However, the process of fitting an intensity image onto a generic model is computationally very demanding and not a very p...