Electroencephalography (EEG) was the first of the noninvasive brain measures in neuroscience. Technical advances over the last 100 years or so have rendered EEG a true brain imaging technique. Here, we provide an accessible primer on the biophysics of EEG, on measurement aspects, and on the analysis of EEG data. We use the example of event-related potentials (ERPs), although the issues apply equally to other varieties of EEG signals, and provide an overview of analytic methods at the base of the so-called electrical neuroimaging framework. We detail the interpretational strengths of electrical neuroimaging for organizational researchers and describe some domains of ongoing technical developments. We likewise emphasize practical considerations with the use of EEG in more real-world settings. This primer is intended to provide organizational researchers specifically, and novices more generally, an access point to understanding how EEG may be applied in their research.
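The core of ERP analysis described above is trial averaging with baseline correction. The sketch below illustrates this on simulated data; the array dimensions, sampling rate, and epoch window are illustrative assumptions, not values from the primer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-trial EEG: 100 trials x 64 channels x 500 samples
# (epochs spanning -100 to 400 ms at 1 kHz around stimulus onset).
n_trials, n_channels, n_samples = 100, 64, 500
eeg = rng.normal(0.0, 10.0, size=(n_trials, n_channels, n_samples))  # microvolts

# Baseline correction: subtract each trial's mean over the pre-stimulus
# window (first 100 samples) so the ERP reflects stimulus-evoked change.
baseline = eeg[:, :, :100].mean(axis=2, keepdims=True)
eeg_corrected = eeg - baseline

# The ERP is the average across trials: phase-locked activity survives,
# while non-phase-locked activity averages toward zero.
erp = eeg_corrected.mean(axis=0)  # shape: (n_channels, n_samples)
```

In practice this pipeline is preceded by filtering and artifact rejection, but the averaging step itself is as simple as shown.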
In real-world environments, information is typically multisensory, and objects are a primary unit of information processing. Object recognition and action necessitate attentional selection of task-relevant from among task-irrelevant objects. However, the brain and cognitive mechanisms governing these processes remain not well understood. Here, we demonstrate that attentional selection of visual objects is controlled by integrated top–down audiovisual object representations ("attentional templates") while revealing a new brain mechanism through which they can operate. In multistimulus (visual) arrays, attentional selection of objects in humans and animal models is traditionally quantified via "the N2pc component": spatially selective enhancements of neural processing of objects within ventral visual cortices at approximately 150–300 msec poststimulus. In our adaptation of Folk et al.'s [Folk, C. L., Remington, R. W., & Johnston, J. C. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030–1044, 1992] spatial cueing paradigm, visual cues elicited weaker behavioral attention capture and an attenuated N2pc during audiovisual versus visual search. To provide direct evidence for the brain, and thus cognitive, mechanisms underlying top–down control in multisensory search, we analyzed global features of the electrical field at the scalp across our N2pc measurements. In the N2pc time window (170–270 msec), color cues elicited brain responses differing in both strength and topography. This latter finding is indicative of changes in active brain sources. Thus, in multisensory environments, attentional selection is controlled via integrated top–down object representations, and so not only by separate sensory-specific top–down feature templates (as suggested by traditional N2pc analyses).
We discuss how the electrical neuroimaging approach can aid research on top–down attentional control in naturalistic, multisensory settings and on other neurocognitive functions in the growing area of real-world neuroscience.
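The electrical neuroimaging analyses referenced above rest on two standard global measures of the scalp field: global field power (GFP), which indexes response strength, and global map dissimilarity (DISS), which indexes topographic (and hence generator) differences independently of strength. A minimal sketch of both, assuming average-referenced scalp maps as NumPy vectors of electrode values:

```python
import numpy as np

def gfp(v):
    """Global field power: the spatial standard deviation of a scalp map
    (potential values across all electrodes, average-referenced)."""
    v = v - v.mean()              # recompute the average reference
    return np.sqrt(np.mean(v ** 2))

def diss(u, v):
    """Global map dissimilarity: RMS difference between two maps after each
    is average-referenced and scaled to unit GFP. Ranges from 0 (identical
    topographies) to 2 (inverted topographies); insensitive to strength."""
    u = (u - u.mean()) / gfp(u)
    v = (v - v.mean()) / gfp(v)
    return np.sqrt(np.mean((u - v) ** 2))
```

Because DISS normalizes each map by its GFP, a nonzero DISS implies a change in the configuration of active brain sources rather than merely a change in their strength, which is the logic behind the topographic findings reported above.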
Everyday vision includes the detection of stimuli, figure-ground segregation, as well as object localization and recognition. Such processes must often surmount impoverished or noisy conditions; borders are perceived despite occlusion or absent contrast gradients. These illusory contours (ICs) are an example of so-called mid-level vision, with an event-related potential (ERP) correlate at ∼100-150 ms post-stimulus onset originating within lateral-occipital cortices (the IC effect). Visual completion processes supporting IC perception are currently considered exclusively visual; any influence from other sensory modalities is unknown. It is now well-established that multisensory processes can influence both low-level vision (e.g., detection) as well as higher-level object recognition. By contrast, it is unknown if mid-level vision exhibits multisensory benefits and, if so, through what mechanisms. We hypothesized that sounds would impact the IC effect. We recorded 128-channel ERPs from 17 healthy, sighted participants who viewed ICs or no-contour (NC) counterparts either in the presence or absence of task-irrelevant sounds. The IC effect was enhanced by sounds and involved the recruitment of a distinct configuration of active brain areas over the 70-170 ms post-stimulus period. IC-related source-level activity within the lateral occipital cortex (LOC), inferior parietal lobe (IPL), as well as primary visual cortex (V1) was enhanced by sounds. Moreover, the activity in these regions was correlated when sounds were present, but not when absent. Results from a control experiment, which employed amodal variants of the stimuli, suggested that sounds impact the perceived brightness of the IC rather than shape formation per se. We provide the first demonstration that multisensory processes augment mid-level vision and everyday visual completion processes, and that one of the mechanisms is brightness enhancement.
These results have important implications for the design of treatments and/or visual aids for low-vision patients.
Sensory substitution is an effective means to rehabilitate many visual functions after visual impairment or blindness. Tactile information, for example, is particularly useful for functions such as reading, mental rotation, shape recognition, or exploration of space. Extant haptic technologies typically rely on real physical objects or pneumatically driven renderings and thus provide a limited library of stimuli to users. New developments in digital haptic technologies now make it possible to actively simulate an unprecedented range of tactile sensations. We provide a proof-of-concept for a new type of technology (hereafter haptic tablet) that renders haptic feedback by modulating the friction of a flat screen through ultrasonic vibrations of varying shapes to create the sensation of texture when the screen is actively explored. We reasoned that participants should be able to create mental representations of letters presented in normal and mirror-reversed haptic form without the use of any visual information and to manipulate such representations in a mental rotation task. Healthy sighted, blindfolded volunteers were trained to discriminate between two letters (either L and P, or F and G; counterbalanced across participants) on a haptic tablet. They then tactually explored all four letters in normal or mirror-reversed form at different rotations (0°, 90°, 180°, and 270°) and indicated letter form (i.e., normal or mirror-reversed) by pressing one of two mouse buttons. We observed the typical effect of rotation angle on object discrimination performance (i.e., greater deviation from 0° resulted in worse performance) for trained letters, consistent with mental rotation of these haptically-rendered objects. We likewise observed generally slower and less accurate performance with mirror-reversed compared to prototypically oriented stimuli. 
Our findings extend existing research in multisensory object recognition by indicating that a new technology simulating active haptic feedback can support the generation and spatial manipulation of mental representations of objects. Thus, such haptic tablets can offer a new avenue to mitigate visual impairments and train skills dependent on mental object-based representations and their spatial manipulation.
The human brain has the astonishing capacity of integrating streams of sensory information from the environment and forming predictions about future events in an automatic way. Although predictive coding was initially developed to account for visual processing, the bulk of subsequent research has focused on auditory processing, with the famous mismatch negativity signal as possibly the most studied signature of a surprise or prediction error (PE) signal. Auditory PEs are present during various consciousness states. Intriguingly, their presence and characteristics have been linked with residual levels of consciousness and return of awareness. In this review we first give an overview of the neural substrates of predictive processes in the auditory modality and their relation to consciousness. Then, we focus on different states of consciousness - wakefulness, sleep, anesthesia, coma, meditation, and hypnosis - and on what predictive processing has been able to disclose about brain functioning in such states. We review studies investigating how the neural signatures of auditory predictions are modulated by states of reduced or lacking consciousness. As a future outlook, we propose the combination of electrophysiological and computational techniques that will allow investigation of which facets of sensory predictive processes are maintained when consciousness fades away.
Highlights
- By age 7, children show adult-like task-set contingent attentional capture in behavior (top-down visual attentional control).
- Children showed no behavioral evidence for multisensory enhancement of attention capture by visual objects paired with sounds.
- However, 9-year-olds showed adult-like EEG topographic patterns, differing when elicited by multisensory vs. purely visual distractors.
- Traditional N2pc analyses showed no N2pc component in any of the child age groups, and no multisensory modulations in adults.
- Electrical neuroimaging of well-known ERP components is more sensitive to developmental change in neurocognitive processes.
Traditional research on attentional control has largely focused on single senses and the importance of one's behavioural goals in controlling attentional selection, thus limiting its generalizability to real-world contexts. These contexts are both inherently multisensory and contain regularities that also contribute to attentional control. To get a better understanding of how attention is controlled in the real world, we investigated how visual attentional capture was impacted by top-down goals (indexed by task-set contingent attentional capture) and the multisensory nature of stimuli, as well as top-down contextual factors such as semantic relationships and temporal predictability of stimulus onset. Participants performed a multisensory version of Folk et al.'s (1992) spatial cueing paradigm, while their 129-channel event-related potentials (ERPs) were recorded. Reaction-time spatial cueing served as a behavioural measure of attentional control, while the N2pc ERP component was analysed both canonically and using a multivariate electrical neuroimaging (EN) framework. Behaviourally, target-congruent colour distractors captured attention more strongly in the simultaneous than in the semantically congruent condition (nontarget-congruent colour distractors failed to capture attention), with no behavioural evidence for context modulating multisensory enhancements of capture. However, our EN analyses revealed context-based influences on attention to both visual and multisensory distractors, both in how strongly they activated brain networks and in the type of brain networks activated. In both cases, these context-driven brain response modulations occurred early on (long before the traditional N2pc time window), with network-based modulations at approximately 30 ms post-distractor, followed by strength-based modulations at approximately 100 ms post-distractor. Our findings revealed that in naturalistic settings, meaning, alongside predictions (spatial, temporal, etc.), might be a second important source of contextual information utilised to facilitate goal-directed attention. Therein, attentional selection is controlled by an interplay of one's goals, stimulus perceptual (multisensory-driven) salience, and an interaction of stimulus meaning and its predictability. Our study demonstrates how investigating both traditional, lab-studied control mechanisms and processes more typical of everyday life reveals a complex interplay between goal-, stimulus- and context-based processes in attentional control. As such, our findings call for a revision of traditional models of visual attentional control to account for the role of both contextual and multisensory control mechanisms.