Perceptual processes underlying individual differences in face-recognition ability remain poorly understood. We compared the visual sampling of 37 adult super-recognizers (individuals with superior face-recognition ability) with that of 68 typical adult viewers by measuring gaze position as they learned and recognized unfamiliar faces. In both phases, participants viewed faces through “spotlight” apertures that varied in size, with face information restricted in real time around their point of fixation. We found higher accuracy in super-recognizers at all aperture sizes, showing that their superiority does not rely on global sampling of face information but is also evident when they are forced to adopt piecemeal sampling. Additionally, super-recognizers made more fixations, focused less on the eye region, and distributed their gaze more widely than typical viewers. These differences were most apparent when learning faces and were consistent with trends we observed across the broader ability spectrum, suggesting that they reflect factors that vary dimensionally in the general population.
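The gaze-contingent “spotlight” manipulation described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the circular aperture shape, the hard edge, and the neutral mean-luminance fill outside the spotlight are all assumptions.

```python
import numpy as np

def spotlight_mask(image, gaze_xy, radius):
    """Keep face information only inside a circular aperture centred on the
    current fixation (gaze_xy = (x, y) in pixels); fill the rest with the
    image's mean luminance as a neutral background."""
    h, w = image.shape[:2]
    ys, xs = np.ogrid[:h, :w]
    dist = np.sqrt((xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2)
    inside = dist <= radius                      # True within the spotlight
    out = np.full_like(image, image.mean())      # neutral fill everywhere
    out[inside] = image[inside]                  # reveal only the fixated region
    return out

# Example: 100 x 100 grey-level "face", 20-px aperture at the image centre.
face = np.random.rand(100, 100)
masked = spotlight_mask(face, gaze_xy=(50, 50), radius=20)
```

In a real experiment this masking would be re-applied on every display frame using the latest eye-tracker sample, so the visible region follows the observer's fixation.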
In the last 20 years, there has been increasing interest in studying visual attentional processes under more natural conditions. In the present study, we aimed to determine the critical age at which children show adult-like performance and attentional control in a visually guided task, in a naturalistic, dynamic, and socially relevant context: road crossing. We monitored visual exploration and crossing decisions in adults and in children aged between 5 and 15 years while they watched road-traffic videos containing a range of traffic densities, with or without pedestrians. Children aged 5–10 years (y/o) showed less systematic gaze patterns. More specifically, adults and 11–15 y/o children looked mainly at the vehicles’ appearing point, an optimal location for sampling diagnostic information for the task. In contrast, 5–10 y/os looked more at socially relevant stimuli and attended to moving vehicles further along the trajectory when traffic density was high. Critically, 5–10 y/o children also made more crossing decisions than 11–15 y/os and adults. Our findings reveal a critical shift around 10 y/o in attentional control and crossing decisions in a road-crossing task.
Exploring potential mechanisms underpinning the therapeutic effects of surfing

“Get in and wrestle with the sea; wing your heels with the skill and power that reside in you, hit the sea's breakers, master them, and ride upon their backs as a king should.” (Jack London)

From its roots in ancient Polynesian and Hawaiian cultures, the twentieth century saw surfing spread around the world to become a popular sport and leisure activity (Finney & Houston, 1996). Despite the popularity of surfing, there has been little scientific research to date investigating motivations for, and benefits from, surfing. However, several cross-sectional studies have found depression and anxiety to be lower in surfers (Amrhein, Barkhoff, & Heiby, 2016; Levin & Taylor, 2011), and surfing has been found to provide a sense of respite from symptoms of trauma in combat veterans (Caddick, Phoenix, & Smith, 2015a; Caddick, Smith, & Phoenix, 2015b). Furthermore, a small but growing number of studies have investigated the effects of surfing-based mental health interventions. Although there is variation in specific design, surf therapy programs typically involve group-based surfing instruction and can contain elements of psychoeducation, self-care and wellbeing discussions, creating a safe space, socialization, and community and rapport building. Although still largely preliminary, the results are promising. For instance, studies have found decreases in anxiety and depression in veterans as a result of these programs (Rogers, Mallinson, & Peppers, 2014; Walter et al., 2019). Surf therapy programs for at-risk youth and youth with disabilities have also reported various benefits such as improvements in behavior, social skills, self-esteem, emotion regulation, and psychological well-being (Cavanaugh & Rademacher,
Previous research has shown that visual attention does not always exactly follow gaze direction, leading to the concepts of overt and covert attention. However, it is not yet clear how such covert shifts of visual attention to peripheral regions impact the processing of the targets we directly foveate as they move in our visual field. The current study utilised the coregistration of eye-position and EEG recordings while participants tracked moving targets that were embedded with a 30 Hz frequency tag in a steady-state visually evoked potential (SSVEP) paradigm. When the task required attention to be divided between the moving target (overt attention) and a peripheral region where a second target might appear (covert attention), the SSVEPs elicited by the tracked target in the 30 Hz frequency band were significantly, but transiently, lower than when participants did not have to covertly monitor for a second target. Our findings suggest that neural responses of overt attention are only briefly reduced when attention is divided between covert and overt areas. This neural evidence is in line with theoretical accounts describing attention as a pool of finite resources, such as perceptual load theory. Altogether, these results have practical implications for many real-world situations where covert shifts of attention may transiently reduce visual processing of objects even when they are directly being tracked with the eyes.
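Frequency tagging works because a stimulus flickering at 30 Hz drives a neural response at that same frequency, which can be read out as a peak in the EEG amplitude spectrum. A minimal sketch on synthetic data follows; the sampling rate, epoch length, and signal-to-noise values are arbitrary assumptions for illustration, not the study's recording parameters.

```python
import numpy as np

fs = 500          # assumed sampling rate (Hz)
tag_freq = 30.0   # frequency tag carried by the tracked target
t = np.arange(0, 2.0, 1 / fs)  # one 2-s epoch

# Synthetic EEG: the 30 Hz tag buried in broadband noise (illustration only).
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(0.0, 1.0, t.size)

# Amplitude spectrum; the SSVEP appears as a peak at the tag frequency.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```

Comparing the amplitude in the 30 Hz bin across conditions (covert monitoring required vs. not) is the kind of contrast the reported SSVEP reduction rests on.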
Background: The environments that we live in impact our ability to recognise objects, with recognition being facilitated when objects appear in expected locations (congruent) compared to unexpected locations (incongruent). However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are impacted by the environment. In this experiment, we seek to examine the impact real-world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment. Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment, a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired-samples t-tests in the N300/N400 time windows in an attempt to replicate congruency effects on the N300/N400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency. Conclusions: Based on previous literature, we hypothesise that scene-object congruence will facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher-level visual and semantic information will be represented earlier for congruent scenes than incongruent scenes. By collecting mEEG data while participants are exploring a real-world environment, we will be able to determine the impact of a natural context on object recognition, and on the different processing stages of object recognition.
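Representational similarity analysis compares the pattern of pairwise dissimilarities among conditions in neural data with the same pairwise structure in a model. A toy sketch on random data follows; the array sizes, the correlation-distance RDM, and the Spearman comparison are illustrative assumptions, not the registered analysis pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix in condensed form:
    1 - Pearson correlation between every pair of condition patterns."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(1)
n_objects, n_channels, n_features = 20, 64, 300

neural = rng.normal(size=(n_objects, n_channels))  # e.g. one mEEG time point
model = rng.normal(size=(n_objects, n_features))   # e.g. vision- or semantic-model features

# Second-order similarity: rank-correlate the neural and model RDMs.
rho, p = spearmanr(rdm(neural), rdm(model))
```

Repeating this correlation at each time point yields a time course of when model information (visual vs. semantic) is reflected in the neural signal, which is how "represented earlier" predictions of this kind are typically tested.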
Visual object recognition is a highly dynamic process by which we extract meaningful information about the things we see. However, the functional relevance and informational properties of feedforward and feedback signals remain largely unspecified. Additionally, it remains unclear whether computational models of vision alone can accurately capture object-specific representations and the evolving spatiotemporal neural dynamics. Here, we probe these dynamics using a combination of representational similarity and connectivity analyses of fMRI and MEG data recorded during the recognition of familiar, unambiguous objects from a wide range of categories. Modelling the visual and semantic properties of our stimuli using an artificial neural network as well as a semantic feature model, we find that unique aspects of the neural architecture and connectivity dynamics relate to visual and semantic object properties. Critically, we show that recurrent processing between anterior and posterior ventral temporal cortex relates to higher-level visual properties prior to semantic object properties, in addition to semantic-related feedback from the frontal lobe to the ventral temporal lobe between 250 and 500 ms after stimulus onset. These results demonstrate the distinct contributions semantic object properties make in explaining neural activity and connectivity, highlighting that semantics is a core part of object recognition.