Measurement of pupil size (pupillometry) has recently gained renewed interest from psychologists, but there is little agreement on how pupil-size data is best analyzed. Here we focus on one aspect of pupillometric analyses: baseline correction, i.e., analyzing changes in pupil size relative to a baseline period. Baseline correction is useful in experiments that investigate the effect of some experimental manipulation on pupil size. In such experiments, baseline correction improves statistical power by taking into account random fluctuations in pupil size over time. However, we show that baseline correction can also distort data if unrealistically small pupil sizes are recorded during the baseline period, which can easily occur due to eye blinks, data loss, or other distortions. Divisive baseline correction (corrected pupil size = pupil size/baseline) is affected more strongly by such distortions than subtractive baseline correction (corrected pupil size = pupil size − baseline). We discuss the role of baseline correction as a part of preprocessing of pupillometric data, and make five recommendations: (1) before baseline correction, perform data preprocessing to mark missing and invalid data, but assume that some distortions will remain in the data; (2) use subtractive baseline correction; (3) visually compare your corrected and uncorrected data; (4) be wary of pupil-size effects that emerge faster than the latency of the pupillary response allows (within ±220 ms after the manipulation that induces the effect); and (5) remove trials on which baseline pupil size is unrealistically small (indicative of blinks and other distortions).
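The subtractive correction and trial-exclusion recommendations above can be sketched in code. This is a minimal illustration, not the authors' implementation: the function name `baseline_correct` and the threshold value are hypothetical, and real pipelines would first mark blinks and missing samples during preprocessing.

```python
import numpy as np

def baseline_correct(trials, baseline_window, min_baseline=200.0):
    """Subtractive baseline correction for pupil-size traces.

    trials          : 2D array (n_trials x n_samples) of pupil sizes
    baseline_window : slice selecting the samples of the baseline period
    min_baseline    : trials whose mean baseline falls below this
                      (arbitrary-unit, hypothetical) threshold are treated
                      as distorted (e.g. by blinks) and excluded, per
                      recommendation (5) above.
    """
    # Mean pupil size during the baseline period, ignoring marked-missing samples.
    baselines = np.nanmean(trials[:, baseline_window], axis=1)
    # Keep only trials with a realistic baseline.
    valid = baselines >= min_baseline
    # Recommendation (2): subtract (rather than divide by) the baseline,
    # which is less sensitive to unrealistically small baseline values.
    corrected = trials[valid] - baselines[valid, np.newaxis]
    return corrected, valid
```

A divisive variant would instead compute `trials[valid] / baselines[valid, np.newaxis]`; with a near-zero baseline this division blows up the corrected values, which is the distortion the abstract warns about.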
As the neural representation of visual information is initially coded in retinotopic coordinates, eye movements (saccades) pose a major problem for visual stability. If no visual information were maintained across saccades, retinotopic representations would have to be rebuilt after each saccade. It is currently strongly debated what kind of information (if any at all) is accumulated across saccades, and when this information becomes available after a saccade. Here, we use a motion illusion to examine the accumulation of visual information across saccades. In this illusion, an annulus with a random texture slowly rotates and is then replaced with a second texture (motion transient). With increasing rotation durations, observers consistently perceive the transient as a large rotational jump in the direction opposite to the rotation (backward jump). We first show that accumulated motion information is updated spatiotopically across saccades. Then, we show that this accumulated information is readily available after a saccade, immediately biasing postsaccadic perception. The current findings suggest that presaccadic information is used to facilitate postsaccadic perception and support a forward model of transsaccadic perception, one that anticipates the consequences of eye movements and operates within the narrow perisaccadic time window.
The experience of our visual surroundings appears continuous, contradicting the erratic nature of visual processing due to saccades. A possible way the visual system can construct a continuous experience is by integrating presaccadic and postsaccadic visual input. However, saccades rarely land exactly at the intended location. Feature integration would therefore need to be robust against variations in saccade execution to facilitate visual continuity. In the current study, observers reported a feature (color) of the saccade target, which occasionally changed slightly during the saccade. In transsaccadic change trials, observers reported a mixture of the pre- and postsaccadic color, indicating transsaccadic feature integration. Saccade landing distance was not a significant predictor of the reported color. Next, to investigate the influence of more extreme deviations of the saccade landing point on color reports, we used a global-effect paradigm in a second experiment. In global-effect trials, a distractor appeared together with the saccade target, causing most saccades to land between the saccade target and the distractor. Strikingly, even when saccades landed farther away (up to 4°) from the saccade target than one would expect under single-target conditions, there was no effect of saccade landing point on the reported color. We reason that saccade landing point does not affect feature integration because of a dissociation between the intended saccade target and the actual saccade landing point. Transsaccadic feature integration seems to be a mechanism that depends on visual spatial attention and is, as a result, robust against variance in saccade landing point.
The ability to adaptively follow conspecific eye movements is crucial for establishing shared attention and for survival. Indeed, in humans, the gaze direction of others causes reflexive orienting of attention and faster detection of objects at the signaled spatial location. The behavioral evidence of this phenomenon is called gaze-cueing. Although this effect can be conceived as automatic and reflexive, gaze-cueing is often susceptible to context. In fact, gaze-cueing has been shown to interact with other factors that characterize the facial stimulus, such as the kind of cue that induces attention orienting (i.e., gaze or non-symbolic cues) or the emotional expression conveyed by the gaze cues. Here, we review neuroimaging evidence on the neural bases of gaze-cueing and the perception of gaze direction, and on how contextual factors interact with the gaze-induced shift of attention. Evidence from neuroimaging, as well as from non-invasive brain stimulation and neurologic patients, highlights the involvement of the amygdala and the superior temporal lobe (especially the superior temporal sulcus, STS) in gaze perception. However, in this review, we also emphasize the discrepancies among attempts to characterize the distinct functional roles of these regions in the processing of gaze. Finally, we conclude by presenting the notion of invariant representation and underlining its value as a conceptual framework for the future characterization of the perceptual processing of gaze within the STS.
Low-level visual information is maintained across saccades, allowing for a postsaccadic hand-off between visual areas
Humans move their eyes several times per second, yet we perceive the outside world as continuous despite the sudden disruptions created by each eye movement. To date, the mechanism that the brain employs to achieve visual continuity across eye movements remains unclear. While it has been proposed that the oculomotor system quickly updates and informs the visual system about the upcoming eye movement, behavioral studies investigating the time course of this updating suggest the involvement of a slow mechanism, estimated to take more than 500 ms to operate effectively. This is a surprisingly slow estimate, because both the visual system and the oculomotor system process information faster. If spatiotopic updating is indeed this slow, it cannot contribute to perceptual continuity, because it falls outside the temporal regime of typical oculomotor behavior. Here, we argue that the behavioral paradigms used previously are suboptimal for measuring the speed of spatiotopic updating. In this study, we employed a fast gaze-contingent paradigm with high phi as a continuous stimulus across eye movements. We observed fast spatiotopic updating within 150 ms after stimulus onset. The results suggest the involvement of a fast updating mechanism that predictively influences visual perception after an eye movement. The temporal characteristics of this mechanism are compatible with the rate at which saccadic eye movements typically occur in natural viewing.
In this experiment, we demonstrate modulation of the pupillary light response by spatial working memory (SWM). The pupillary light response has previously been shown to reflect the focus of covert attention, as demonstrated by smaller pupil sizes when a subject covertly attends a location on a bright background compared to a dark background. We took advantage of this modulation of the pupillary light response to measure the focus of attention during a SWM delay. Subjects performed two tasks in which a stimulus was presented in the periphery on either the bright or the dark half of a black and white display. Importantly, subjects had to remember the exact location of the stimulus in only one of the two tasks. We observed a modulation of pupil size by background luminance in the delay period, but only when subjects had to remember the exact location. We interpret this as evidence for a tight coupling between spatial attention and maintaining information in SWM. Interestingly, we observed particularly strong modulation of background luminance at the beginning and end of the delay, but not in between. This is suggestive of strategic guidance of spatial attention by the content of spatial working memory when it is task relevant.