Beyond the resolution of front-back confusions, little is known about the mechanisms by which head movement enables listeners to perform a broad range of auditory scene analysis tasks. This experiment examines one such task: how accurately listeners can track a source's azimuth by head movement. A three-talker paradigm was used in a headphone-based, head-tracked virtual environment spatialized with head-related transfer functions (HRTFs). The target moves from trial to trial among two stationary interferers (one male target and two female interferers, all speaking phonetically balanced sentences), and the listener is asked to turn her head until she believes she is facing the target. The two independent variables are reverberation level (anechoic, low reverberance (recording room), and high reverberance (parking garage)) and target azimuth (seven angles, from +90 to -90 degrees in steps of 30 degrees); the measured responses are facing accuracy, in terms of angular deviation from the target, and the number of times the playback button is pressed before a trial is completed (an indicator of task difficulty). Results are discussed within the context of both room acoustics and perception.