This study develops a facial sketch synthesis system based on the two-dimensional direct combined model (2DDCM) approach, which employs a collection of pairwise photo/sketch training samples. The proposed synthesis framework addresses the following key issues. First, each photo/sketch pair is directly combined in a concatenated form in order to fully preserve the relationship between the two modalities. Second, the photo and sketch images are represented as two-dimensional matrices rather than vectors in order to preserve the facial geometry. Third, the proposed synthesis framework incorporates both the global facial geometry and the local detailed textures. Experiments demonstrate that the proposed approach can synthesize high-quality reconstructed facial sketches from unseen input photos.
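The two representational choices stated above (pairwise concatenation and 2-D matrix form) can be illustrated with a minimal NumPy sketch. The image size and variable names here are hypothetical, and this is only an illustration of the data layout, not the 2DDCM model itself:

```python
import numpy as np

# Hypothetical grayscale photo/sketch pair, kept as 2-D matrices
# (not vectorized) so the row/column facial geometry is preserved.
photo = np.random.rand(64, 64)
sketch = np.random.rand(64, 64)

# Directly combine the pair in concatenated form: stacking along the
# column axis keeps each photo row aligned with the corresponding
# sketch row, preserving their pairwise relationship in one matrix.
combined = np.concatenate([photo, sketch], axis=1)  # shape (64, 128)

# Vectorization, by contrast, flattens the pair and discards the
# 2-D spatial structure that the matrix form retains.
vectorized = combined.ravel()  # shape (8192,)
```

Keeping the samples in matrix form also lets a model operate on row and column subspaces separately, which is the usual motivation for two-dimensional variants of subspace methods.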