Three types of looming-selective neurons have been found in the nucleus rotundus of pigeons, each computing a different optical variable related to the image expansion of objects approaching on a direct collision course with the bird. None of these neurons responds to simulated approach toward stationary objects. A detailed analysis of these neurons' firing patterns in response to approaching objects of different sizes and velocities shows that one group of neurons signals the relative rate of expansion, tau (τ); a second group signals the absolute rate of expansion, rho (ρ); and a third group signals yet another optical variable, eta (η). The ρ parameter is required for the computation of both τ and η, whose respective ecological functions probably provide precise 'time-to-collision' information and 'early warning' detection for large approaching objects.
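The three optical variables above can be illustrated with the standard definitions from the looming-detection literature: for an object subtending visual angle θ, τ = θ/θ̇ (which approximates time-to-collision at small angles), ρ = θ̇, and η = θ̇·e^(−αθ). The sketch below is a minimal illustration, assuming a spherical object of radius R approaching at constant speed v; the function name, the parameter values, and the constant α are hypothetical choices for the example, not values from the study.

```python
import math

def looming_variables(R, d0, v, t, alpha=1.0):
    """Optical variables for an object of radius R (m) approaching at
    constant speed v (m/s) from initial distance d0 (m), evaluated at
    time t (s). Parameter values here are illustrative only.

    theta : visual angle subtended by the object (rad)
    rho   : absolute rate of expansion, d(theta)/dt
    tau   : relative rate of expansion, theta / (d(theta)/dt),
            which approximates time-to-collision for small angles
    eta   : rho * exp(-alpha * theta)
    """
    d = d0 - v * t                        # current distance to the object
    theta = 2.0 * math.atan(R / d)        # subtended visual angle
    rho = 2.0 * R * v / (d * d + R * R)   # analytic derivative of theta
    tau = theta / rho                     # ~ d / v while theta is small
    eta = rho * math.exp(-alpha * theta)
    return theta, rho, tau, eta

# Example: a 0.5 m-radius object 100 m away, closing at 10 m/s.
theta, rho, tau, eta = looming_variables(R=0.5, d0=100.0, v=10.0, t=0.0)
print(round(tau, 2))  # close to the true time-to-collision of 10 s
```

Note how τ tracks time-to-collision regardless of object size, whereas ρ and η depend on both size and speed, consistent with the distinct ecological roles suggested in the abstract.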
Microsaccades (MSs) are small eye movements that occur during attempted visual fixation. While most studies of MSs focus on their roles in visual processing, some also suggest that MS rate can be modulated by the amount of mental exertion involved in nonvisual processing. The current study examined the effects of task difficulty on MS rate in a nonvisual mental arithmetic task. Experiment 1 revealed a general inverse relationship between MS rate and subjective task difficulty. In Experiment 2, three task phases with different requirements were identified: during calculation (between stimulus presentation and response), postcalculation (after reporting an answer), and a control condition (a matching sequence of events without the need to calculate). MS rate approximately doubled from the during-calculation phase to the postcalculation phase, and was significantly higher in the control condition than in the postcalculation phase. Only during calculation did MS rate generally decrease with greater task difficulty. Our results suggest that nonvisual cognitive processing can suppress MS rate, and that the extent of such suppression is related to task difficulty.
Repeated configurations of random elements induce better search performance than displays of novel random configurations. The mechanism of this contextual cueing effect has been investigated through the RT × Set Size function, and views diverge on whether the effect is driven by attentional guidance or by facilitation of initial perceptual processing or response selection. To explore this question, we recorded eye movements, which provide information about the substages of the search task. The results suggest that the contextual cueing effect is driven mainly by attentional guidance, although facilitation of response selection also plays a role.
By systematically varying cue availability in the stimulus and response phases of a series of same-modality and cross-modality distance matching tasks, we examined the contributions of static visual information, idiothetic information, and optic flow information. The experiment was conducted in a large-scale, open, outdoor environment. Subjects were presented with information about a distance and were then required to turn 180° before producing a distance estimate. Distance encoding and responding occurred via (i) visually perceived target distance, or (ii) distance traversed during either blindfolded or sighted locomotion. Subjects performed with similar accuracy across all conditions. When the stimulus and the response were delivered in the same mode, constant error was minimal in the absence of visual information, whereas overestimation was observed when visual information was present. When the stimulus and response modes differed, a consistent error pattern was observed. By systematically comparing complementary conditions, we found that the availability of visual information during locomotion (particularly optic flow) led to an 'under-perception' of movement relative to conditions in which visual information was absent during locomotion.