In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times, and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more information about the presence of the contralateral eye than N170 amplitudes do, and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or on the allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye.
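As a hedged illustration of the kind of analysis described above, the sketch below computes the mutual information between per-trial visibility of an eye region and binned single-trial N170 latencies, on simulated data. The variable names, binary visibility coding, median-split binning, and simulated latency model are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def mutual_info_bits(x, y, bins=2):
    """Mutual information (bits) between two discretized 1-D variables."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

rng = np.random.default_rng(0)
n_trials = 1000
# Hypothetical per-trial visibility of the contralateral eye region
# (1 = revealed by a Gaussian aperture on that trial).
eye_visible = rng.integers(0, 2, n_trials)
# Simulated single-trial N170 latency (ms): earlier when the eye is visible.
latency = 170 - 10 * eye_visible + rng.normal(0, 8, n_trials)
# Median-split the latencies before computing MI (an assumed binning choice).
lat_bin = (latency > np.median(latency)).astype(int)
print(f"MI(eye visibility; N170 latency) = "
      f"{mutual_info_bits(eye_visible, lat_bin):.3f} bits")
```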
Sensory information from the external world is inherently ambiguous, necessitating prior experience as a constraint on perception. Prolonged experience (adaptation) induces perception of ambiguous morph faces as a category different from the adapted category, suggesting that the underlying neural codes are sensitive to differences between the input and recent experience. Using magnetoencephalography, we investigated the neural dynamics of such experience-dependent visual coding by focusing on the timing of responses to morphs after facial expression adaptation. We show that evoked fields arising from the superior temporal sulcus (STS) reflect the degree to which a morph and the adapted expression deviate. Furthermore, adaptation effects within the STS predict the magnitude of behavioral aftereffects. These findings show that the STS codes expressions relative to recent experience rather than absolutely, and may bias the perception of expressions. One potential neural mechanism for the late timing of both effects appeals to hierarchical models that ascribe a central role to backward connections in mediating predictive codes.

Keywords: hierarchical Bayes · magnetoencephalography · predictive coding · top-down processing · visual evoked fields

Despite ambiguity about what causes retinal images, human vision overcomes uncertainty and ambiguity to parse stimuli into well defined categories, such as identity and expression in faces, necessary to support complex social interactions. A compelling demonstration comes from adaptation aftereffects, where prolonged exposure to a stimulus (adaptation) biases subjects to perceive subsequent stimuli as dissimilar to the adapting stimulus. Thus, visual perception is not veridical but uses previous experience as a referent, often described as a norm (1-4). High-level aftereffects have been shown for face categories, including identities (2, 4), races, genders, and expressions (5). Also, recent studies suggest that the visual system emphasizes the deviation between individual and average (norm) faces (3, 6). Face adaptation therefore provides a powerful paradigm for investigating the role of experience in visual coding. A number of neural mechanisms might mediate these perceptual influences of experience. Relatively simple models propose that adaptation desensitizes feedforward neural pathways specialized for detecting the adapted pattern (2, 7), perhaps by fatigue (8), which allows competing feature detectors to influence perception. Adaptation can also be viewed from the standpoint of hierarchical predictive coding models (8-12), which characterize visual representations as a "prediction error" reflecting the deviation between bottom-up stimulus patterns and top-down expectations tuned by previous experience. When environmental contingencies change, as exemplified by adaptation, these predictions are dynamically recalibrated (i.e., sensory and perceptual learning). An important feature of predictive coding models is that predictive codes are mediated by backward connections from more advanced areas (8-13). Both types of model ...
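To make the norm-based coding idea concrete, here is a toy sketch (not the authors' model) in which prolonged exposure shifts an internal norm by error-driven updates, and the response to an ambiguous probe codes its deviation from that adapted norm. The learning rate, number of exposures, starting norm, and morph scale are illustrative assumptions.

```python
import numpy as np

def adapt_then_respond(adaptor, probe, n_exposures=50, lr=0.1, norm0=0.5):
    """Return the 'prediction error' to a probe after adapting an internal norm."""
    norm = norm0
    for _ in range(n_exposures):       # prolonged exposure to the adaptor
        norm += lr * (adaptor - norm)  # simple error-driven norm update
    return probe - norm                # response codes deviation from the norm

# Morph axis: 0.0 = expression A, 1.0 = expression B; the probe is ambiguous.
print(adapt_then_respond(adaptor=0.0, probe=0.5))  # positive: probe reads B-like
print(adapt_then_respond(adaptor=1.0, probe=0.5))  # negative: probe reads A-like
```

After adapting to either endpoint, the same ambiguous probe yields responses of opposite sign, the analogue of the categorical aftereffect described above.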
These data provide evidence that it is possible to identify clinical high-risk (CHR) participants through population-based web screening. This could be an important strategy for early intervention and diagnosis in psychotic disorders.
The model of the brain as an information processing machine is a profound hypothesis in which neuroscience, psychology and the theory of computation are now deeply rooted. Modern neuroscience aims to model the brain as a network of densely interconnected functional nodes. However, to model the dynamic information processing mechanisms of perception and cognition, it is imperative to understand brain networks at an algorithmic level, i.e., as the information flow that network nodes code and communicate. Here, using innovative methods (Directed Feature Information), we reconstructed examples of possible algorithmic brain networks that code and communicate the specific features underlying two distinct perceptions of the same ambiguous picture. In each observer, we identified a network architecture comprising one occipito-temporal hub where the features underlying both perceptual decisions dynamically converge. Our focus on detailed information flow represents an important step towards a new brain algorithmics for modeling the mechanisms of perception and cognition.
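The abstract does not spell out the Directed Feature Information estimator, so the sketch below implements one plausible discrete formulation, assumed here for illustration: the sender-to-receiver transfer entropy minus the same quantity additionally conditioned on a stimulus feature, leaving the transferred information that is about that feature. The toy generative setup and all variable names are assumptions.

```python
import numpy as np

def cond_mi_bits(x, y, z):
    """I(X;Y|Z) in bits for 1-D arrays of small non-negative integers."""
    xyz = np.stack([x, y, z], axis=1)
    n = len(xyz)
    def probs(cols):
        vals, counts = np.unique(xyz[:, cols], axis=0, return_counts=True)
        return {tuple(v): c / n for v, c in zip(vals, counts)}
    pxyz = probs([0, 1, 2])
    pxz, pyz, pz = probs([0, 2]), probs([1, 2]), probs([2])
    return sum(p * np.log2(p * pz[(k[2],)]
                           / (pxz[(k[0], k[2])] * pyz[(k[1], k[2])]))
               for k, p in pxyz.items())

rng = np.random.default_rng(1)
n = 5000
feature = rng.integers(0, 2, n)                 # a specific stimulus feature F
sender_past = feature ^ (rng.random(n) < 0.1)   # sender node codes F (noisy)
recv_past = rng.integers(0, 2, n)               # receiver's own past activity
recv_now = sender_past ^ (rng.random(n) < 0.1)  # receiver inherits sender's code

# Transfer entropy: I(sender_past; recv_now | recv_past).
te = cond_mi_bits(sender_past, recv_now, recv_past)
# Same quantity, additionally conditioned on F (merged into the conditioner).
te_given_f = cond_mi_bits(sender_past, recv_now, recv_past * 2 + feature)
print(f"feature-specific transfer ~= {te - te_given_f:.3f} bits")
```

The difference is positive only when the sender-to-receiver communication carries information specifically about F, which is the intuition behind feature-resolved measures of information flow.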
To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions and then dynamically converges onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g., the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g., the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming an initially thorough encoding of features over the N170 to leave only the detailed information important for perceptual decisions over the P300.
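A minimal sketch of a classification-image computation under the Bubbles-style sampling described above: average the per-trial aperture masks for one response category and subtract the average for the other. The mask parameters, the location of the simulated "eye" region, and the behavioral model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, size = 2000, 64
yy, xx = np.mgrid[0:size, 0:size]

def bubbles_mask(n_bubbles=5, sigma=4):
    """One trial's mask: a few small Gaussian apertures (assumed parameters)."""
    m = np.zeros((size, size))
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, size, size=2)
        m += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(m, 0.0, 1.0)

masks = np.stack([bubbles_mask() for _ in range(n_trials)])
# Simulated behavior: more accurate when a hypothetical 'eye' region is shown.
eye_energy = masks[:, 10:25, 10:25].mean(axis=(1, 2))
correct = rng.random(n_trials) < 0.5 + 0.4 * (eye_energy > np.median(eye_energy))
# Classification image: mean mask on correct minus mean mask on incorrect trials.
cimage = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
print("most diagnostic pixel:", np.unravel_index(cimage.argmax(), cimage.shape))
```

The resulting image peaks over the region whose visibility drove correct responses; computing it per electrode and per time point gives the spatio-temporal maps the abstract describes.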
A key to understanding visual cognition is to determine "where", "when", and "how" brain responses reflect the processing of the specific visual features that modulate categorization behavior: the "what". The N170 is the earliest event-related potential (ERP) that preferentially responds to faces. Here, we demonstrate that a paradigm shift is necessary to interpret the N170 as the product of an information processing network that dynamically codes and transfers face features across hemispheres, rather than as a local stimulus-driven event. Reverse-correlation methods coupled with information-theoretic analyses revealed that the visibility of the eyes influences face detection behavior. The N170 initially reflects coding of the behaviorally relevant eye contralateral to the sensor, followed by a causal communication of the other eye from the other hemisphere. These findings demonstrate that the deceptively simple N170 ERP hides a complex network information processing mechanism involving initial coding and subsequent cross-hemispheric transfer of visual features.
Current models propose that the brain uses a multi-layered architecture to reduce the high-dimensional visual input to lower-dimensional representations that support face, object and scene categorizations. However, understanding the brain mechanisms that support such information reduction for behavior remains challenging. We addressed the challenge using a novel information theoretic framework that quantifies the relationships between three key variables: single-trial information randomly sampled from an ambiguous scene, source-space MEG responses and perceptual decision behaviors. In each observer, behavioral analysis revealed the scene features that subtend their decisions. Independent source-space analyses revealed the flow of these and other features in cortical activity. We show where (at the junction between the occipital cortex and ventral regions), when (up until 170 ms post-stimulus) and how (by separating task-relevant and task-irrelevant features) brain regions reduce the high-dimensional scene to construct task-relevant feature representations in the right fusiform gyrus that support decisions. Our results inform the occipito-temporal pathway mechanisms that reduce and select information to produce behavior.

Over the past decade, there has been extensive study of the brain regions that support face, object and scene recognition, using different methodologies and modalities of brain measurement (e.g., electrophysiology, E/MEG and fMRI), including across species. A converging set of results now suggests a hierarchically organized architecture of brain regions, spanning the occipital and temporal lobes (1-17), where categorizations unfold over the first few hundred milliseconds of post-stimulus processing (18-22). This same architecture is flexibly involved in multiple categorization tasks that require multiple representational bases. With extensive knowledge of the where and when, the next challenge is to unravel the thorny how. That is, how do the detailed information processing mechanisms of the occipito-ventral pathway dynamically implement flexible visual categorization by selecting, from the high-dimensional input, the low-dimensional information basis required for behavior? In other words, how does our brain extract, from the features of the visual scene that it initially represents, those that are actually useful for the categorization task? To start addressing this considerable challenge, we used Dali's ambiguous painting Slave Market with Disappearing Bust of Voltaire (see Figure 1, Stimulus) as a case-study stimulus because it is a complex visual scene that affords two distinct perceptions. We applied the Bubbles technique (Gosselin & Schyns, 2001) to character...
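As a hedged sketch of relating the three key variables named above (single-trial feature samples, a brain response, and behavior), the code below computes a simple co-information, I(F;R;B) = I(F;B) - I(F;B|R), on simulated binary data; a positive value indicates feature information about behavior that is shared with the response. This is an assumed simplification, not the authors' source-space estimator, and all names are illustrative.

```python
import numpy as np

def mi_bits(x, y):
    """Mutual information (bits) between two binary arrays."""
    joint, _, _ = np.histogram2d(x, y, bins=2)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz]))

rng = np.random.default_rng(3)
n = 5000
feature = rng.integers(0, 2, n)             # task-relevant scene feature F
response = feature ^ (rng.random(n) < 0.2)  # brain response R coding the feature
behavior = response ^ (rng.random(n) < 0.2) # decision B read out from R

i_fb = mi_bits(feature, behavior)
# I(F;B|R): MI within each response level, weighted by its probability.
i_fb_given_r = sum((response == r).mean()
                   * mi_bits(feature[response == r], behavior[response == r])
                   for r in (0, 1))
print(f"co-information I(F;R;B) ~= {i_fb - i_fb_given_r:.3f} bits shared")
```

In this toy setup the behavior depends on the feature only through the response, so nearly all feature-behavior information is shared with R, the signature of a response that mediates the decision.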