Until the day we can record from multiple neurons in undergraduates, understanding how humans process faces requires an interdisciplinary approach, including building computational models that mimic how the brain processes faces. Using machine learning techniques, we can often build models that perform the same tasks people do, in neurophysiologically plausible ways. These models can then be manipulated and analyzed in ways that people cannot be, providing insights that are unavailable from behavioral experiments. For example, as we will see below, our model of perceptual expertise can be "raised" in an environment where its "parents" are cups or cans instead of faces, and the same kind of processing ensues. This demonstrates, at least from our point of view, that there is nothing special about faces as an object class per se; rather, it is what we have to do with them - fine-level discrimination of a homogeneous class - that is special.

In this chapter, we will delineate two dimensions along which computational models of face (and object) processing may vary, and briefly review three such models (Dailey and Cottrell, 1999; O'Reilly and Munakata, 2000; Riesenhuber and Poggio, 1999). Subsequently, we will focus primarily on the model we are most familiar with (our own!) and on how it has been used to reveal potential mechanisms underlying the neural processing of faces and objects: the development of a specialized face processor, how that processor could be recruited for other domains, hemispheric lateralization of face processing, facial expression processing, and the development of face discrimination. At the end, we return to the Riesenhuber and Poggio model to describe the elegant way it has been used to predict fMRI data on face processing. The overall strategy of these modeling efforts is