This article presents new methods for realistic and fast 3D facial expression synthesis based on an individualized face model constructed from an anatomical perspective. The face model has a 3D structure of anatomically motivated facial skin, muscles, and skull. We start from a highly accurate facial mesh reconstructed from measurements of the individual's face. Taking as input a collection of reflectance images covering the face, a view-based texture blending method automatically generates a comprehensive texture map for photorealistic rendering. Based on the reconstructed facial mesh, a deformable multilayer skin model is developed to simulate the dynamic behavior of the skin, taking into account its nonlinear stress-strain relationship and incompressibility. We develop a muscle modeling process that includes three kinds of force-based facial muscle models to simulate facial muscle contraction. The 3D face model incorporates a skull structure that extends the scope of facial motion, facilitates facial muscle construction, and constrains skin deformation. Based on the Facial Action Coding System, various facial expressions are synthesized by the contraction of a set of facial muscles. Lagrangian mechanics governs the dynamics, dictating the deformation of the facial skin in response to muscle contraction. For computational efficiency, we devise an adaptive computation algorithm for the numerical simulation: it uses either a semi-implicit integration scheme or a quasi-static solver to compute the relaxation of the facial mesh, traversing the designed data structures in breadth-first order. The resulting system enables us to construct realistic face models of living humans and to synthesize facial expressions in real time.
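To make the numerical core concrete, the sketch below shows one semi-implicit (symplectic Euler) update applied to a mass-spring skin mesh while visiting nodes in breadth-first order. This is a minimal illustration under assumed conventions, not the paper's implementation: the function name `semi_implicit_step` and the node attributes (`m`, `gamma`, `x`, `v`, `f_muscle`, `springs`) are hypothetical, and the linear spring force stands in for the paper's nonlinear, incompressible multilayer skin model.

```python
# Assumed sketch: one semi-implicit Euler step over a mass-spring skin mesh,
# visiting nodes in breadth-first order. Node layout and names are illustrative.
from collections import deque

def semi_implicit_step(nodes, root, dt):
    """Advance velocities, then positions, for all nodes reachable from `root`.

    Each node is assumed to carry: mass m, damping gamma, position x (3-list),
    velocity v (3-list), an applied muscle force f_muscle (3-list), and
    springs = [(neighbor_node, rest_length, stiffness), ...].
    """
    visited = {root}
    queue = deque([root])
    while queue:                               # breadth-first traversal of the mesh
        n = queue.popleft()
        # Net force on this node: viscous damping plus applied muscle force ...
        f = [-n.gamma * vi + fi for vi, fi in zip(n.v, n.f_muscle)]
        # ... plus linear spring forces from connected neighbors
        # (positions seen here are whatever the sweep has produced so far,
        # i.e. a Gauss-Seidel-style pass over the mesh).
        for nb, rest, k in n.springs:
            d = [xj - xi for xi, xj in zip(n.x, nb.x)]
            length = sum(c * c for c in d) ** 0.5 or 1e-9
            s = k * (length - rest) / length
            f = [fc + s * dc for fc, dc in zip(f, d)]
        # Semi-implicit Euler: update velocity first, then position with the
        # new velocity, which is more stable than explicit Euler for stiff springs.
        n.v = [vi + dt * fc / n.m for vi, fc in zip(n.v, f)]
        n.x = [xi + dt * vi for xi, vi in zip(n.x, n.v)]
        for nb, _, _ in n.springs:
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
```

Updating the velocity before the position is what distinguishes the semi-implicit scheme from explicit Euler and is one standard way to keep a stiff spring network stable at interactive time steps; the quasi-static solver mentioned above would instead skip the velocity state and solve directly for an equilibrium mesh configuration.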