We present a fully automatic system for face recognition in databases that contain only a small number of samples (even a single sample) per individual. In this paper, the shape localization problem is formulated in a Bayesian framework. In the learning stage, the RankBoost approach is introduced to model the likelihood of the local features associated with each fiducial point while preserving the prior ranking order between the ground-truth position and its neighbors; in the inference stage, a simple yet efficient iterative algorithm is proposed to uncover the MAP shape by locally modeling the likelihood distribution around each fiducial point. Based on the accurately located fiducial points, two popular, mutually enhancing texture features are automatically extracted and integrated for face representation: global texture features, i.e., the normalized shape-free gray-level values enclosed in the mean shape, and local texture features, represented by the Gabor wavelets extracted at the fiducial points (eye corners, mouth corners, etc.). The global texture mainly encodes the low-frequency information of a face, while the local texture encodes the local high-frequency components. Extensive experiments show that the proposed shape localization approach significantly improves localization accuracy, robustness, and the face recognition rate; moreover, experiments conducted on the FERET and Yale databases show that our algorithm outperforms the classical Eigenfaces and Fisherfaces methods, as well as other approaches that utilize shape, global, and local textures.
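To illustrate the local texture features described above, the sketch below samples Gabor-wavelet responses (a "jet") at each fiducial point. It is a minimal illustration only: the kernel size, wavelengths, number of orientations, and function names are assumptions for demonstration, not the parameters used in the paper.

```python
import numpy as np

def gabor_kernel(size=17, wavelength=4.0, theta=0.0, sigma=3.0):
    """Real part of a 2-D Gabor kernel (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates to orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))  # Gaussian envelope
    carrier = np.cos(2 * np.pi * xr / wavelength)          # sinusoidal carrier
    return envelope * carrier

def local_gabor_features(image, points, wavelengths=(4.0, 8.0), orientations=4):
    """Return one Gabor jet per fiducial point.

    `points` are (row, col) fiducial locations assumed to lie far enough
    from the image border for the kernel to fit.
    """
    feats = []
    for (r, c) in points:
        jet = []
        for wl in wavelengths:
            for k in range(orientations):
                kern = gabor_kernel(wavelength=wl,
                                    theta=k * np.pi / orientations)
                half = kern.shape[0] // 2
                patch = image[r - half:r + half + 1, c - half:c + half + 1]
                # inner product of local patch with the kernel = filter response
                jet.append(float(np.sum(patch * kern)))
        feats.append(jet)
    return np.asarray(feats)
```

Each point yields a jet of `len(wavelengths) * orientations` responses; in the system these jets, concatenated over all fiducial points, would serve as the local texture representation that complements the global shape-free texture.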