In making sense of the visual world, the brain's processing is driven by two factors: the physical information provided by the eyes ("bottom-up" data) and the expectancies shaped by past experience ("top-down" influences). We use degraded stimuli to tease apart the effects of bottom-up and top-down processes, because such stimuli are far easier to recognize once observers have prior knowledge of the undegraded images. Using machine learning algorithms, we quantify how much information brain regions contain about the stimuli as subjects learn the coherent images. Our results show that several distinct regions, including high-level visual areas and the retinotopic cortex, contain more information about degraded stimuli after prior knowledge is acquired. Critically, these regions are separate from those that exhibit classical priming, indicating that top-down influences are more than feature-based attention. Together, our results show how the neural processing of complex imagery is rapidly influenced by fleeting experiences.

At what stage of visual processing does bottom-up information combine with top-down expectations to yield the eventual percept? This question lies at the heart of a mechanistic understanding of feed-forward/feed-back interactions, as they are implemented in the brain and as they might be instantiated by computational visual systems. Furthermore, this question is of central significance not only for vision but for all sensory modalities, because the combination of current and prior data is ubiquitous as a processing principle.

A compelling demonstration of the role of prior experience is obtained with images so degraded that they are initially perceived as devoid of meaning. However, after being shown the coherent versions, observers are readily able to parse the previously uninterpretable image. The well-known Dalmatian dog picture (1), a black-and-white thresholded photograph, and the Mooney images (2) are classic examples of this phenomenon.
Other examples of top-down knowledge facilitating sensory processing include phonemic restoration (3) and the interaction between depth perception and object recognition (4).

The approach of comparing neural responses to degraded images before and after exposure to the fully coherent image has been used by several research groups to identify the correlates of top-down processing. For example, PET scans of brain activity elicited by Mooney images before and after disambiguation show that regions of the inferior temporal cortex, as well as medial and lateral parietal regions, exhibit greater activity in response to recognized images (5). Progressive revealing paradigms, in which an image gradually increases in coherence, elicit increased and accelerated functional magnetic resonance imaging (fMRI) activation in several regions, including the fusiform gyrus and the peristriate cortex, when subjects have prior experience with the images (6). In addition, EEG correlates show that distorted or schematic line drawings elicit face-specific N170 event-related potential components, which are belie...
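The information-quantification approach mentioned in the abstract, decoding stimulus identity from multivoxel response patterns, can be illustrated with a minimal sketch. This is not the study's actual pipeline: it uses a simple nearest-centroid classifier with interleaved cross-validation on synthetic "voxel" data, and all sizes, signal strengths, and names here are hypothetical. Decoding accuracy above chance serves as a proxy for how much stimulus information a region carries, and the synthetic "post-knowledge" patterns are given a stronger class signal to mimic the effect of prior experience.

```python
import numpy as np

def nearest_centroid_cv(patterns, labels, n_folds=5):
    """Cross-validated nearest-centroid decoding accuracy.

    patterns : (n_trials, n_voxels) array of response patterns
    labels   : (n_trials,) integer stimulus labels
    Accuracy above chance (1 / n_classes) is taken as evidence that
    the patterns carry information about the stimuli.
    """
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    folds = np.arange(n) % n_folds  # interleaved fold assignment
    correct = 0
    for k in range(n_folds):
        train, test = folds != k, folds == k
        classes = np.unique(labels[train])
        # one mean pattern (centroid) per stimulus class, from training trials
        centroids = np.stack([patterns[train & (labels == c)].mean(axis=0)
                              for c in classes])
        # assign each held-out trial to its nearest centroid (Euclidean)
        d = np.linalg.norm(patterns[test][:, None, :] - centroids[None, :, :],
                           axis=2)
        correct += np.sum(classes[d.argmin(axis=1)] == labels[test])
    return correct / n

# Synthetic demo (hypothetical data): after "learning", the class signal
# embedded in the patterns is stronger, so decoding accuracy rises.
rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 50
labels = np.tile([0, 1], n_trials // 2)
signal = rng.normal(size=(2, n_voxels))          # one template per stimulus
noise = rng.normal(size=(n_trials, n_voxels))
pre = 0.2 * signal[labels] + noise               # weak stimulus information
post = 1.0 * signal[labels] + noise              # stronger information
acc_pre = nearest_centroid_cv(pre, labels)
acc_post = nearest_centroid_cv(post, labels)
print(f"decoding accuracy pre: {acc_pre:.2f}, post: {acc_post:.2f}")
```

In real analyses the same logic is applied per brain region, and comparing pre- versus post-exposure accuracies asks whether that region's patterns gained stimulus information with prior knowledge.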