When attempting to understand where people look during scene perception, researchers typically focus on the relative contributions of low- and high-level cues. Computational models of the contribution of low-level features to fixation selection, with modifications to incorporate top-down sources of information, have been abundant in recent research. However, we are still some way from a model that can explain many of the complexities of eye movement behaviour. Here we show that understanding biases in how we move the eyes can provide powerful new insights into the decision about where to look in complex scenes. A model based solely on these biases, and therefore blind to current visual information, outperformed popular salience-based approaches. Our data show that incorporating an understanding of oculomotor behavioural biases into models of eye guidance is likely to significantly improve our understanding of where we choose to fixate in natural scenes.

Successfully completing many forms of behaviour requires that humans look in the right place at the right time. Ballard and colleagues described this as a "do-it-where-I'm-looking" visual strategy for completing complex tasks (Ballard et al., 1992); a finding that has been replicated across a range of studies of natural behaviour (e.g.
We recorded over 90,000 saccades while observers viewed a diverse collection of natural images, and measured low-level visual features at fixation. The features that discriminated between where observers fixated and where they did not varied considerably with task and with the length of the preceding saccade. Short saccades (<8 degrees) are image-feature dependent; long saccades are less so. In free viewing, short saccades target high-frequency information, whereas long saccades are scale-invariant. When searching for luminance targets, saccades of all lengths are scale-invariant. We argue that models of saccade behaviour must account not only for task but also for saccade length, and that long and short saccades are targeted differently.
While many current models of scene perception debate the relative roles of low- and high-level factors in eye guidance, systematic tendencies in how the eyes move may be informative. We consider how each saccade and fixation is influenced by the one that preceded or followed it during free inspection of images of natural scenes. We find evidence to suggest periods of localized scanning separated by ‘global’ relocations to new regions of the scene. We also find evidence to support the existence of small-amplitude ‘corrective’ saccades in natural image viewing. Our data reveal statistical dependencies between successive eye movements, which may be informative in furthering our understanding of eye guidance.
A state-of-the-art data analysis procedure is presented to conduct hierarchical Bayesian inference and hypothesis testing on delay discounting data. The delay discounting task is a key experimental paradigm used across a wide range of disciplines, from economics and cognitive science to neuroscience, all of which seek to understand how humans or animals trade off the immediacy versus the magnitude of a reward. Bayesian estimation allows rich inferences to be drawn, along with measures of confidence, based upon limited and noisy behavioural data. Hierarchical modelling allows more precise inferences to be made, thus using sometimes expensive or difficult-to-obtain data in the most efficient way. The proposed probabilistic generative model describes how participants compare the present subjective value of reward choices on a trial-to-trial basis, and estimates participant- and group-level parameters. We infer discount rate as a function of reward size, allowing the magnitude effect to be measured. Demonstrations are provided to show how this analysis approach can aid hypothesis testing. The analysis is demonstrated on data from the popular 27-item monetary choice questionnaire (Kirby, 2009), but will accept data from a range of protocols, including adaptive procedures. The software is made freely available to researchers.

Keywords: Decision making · Delay discounting · Intertemporal choice · Magnitude effect · Time preference · Bayesian estimation · MCMC · Financial psychophysics

The analysis code is freely downloadable from https://github.com/drbenvincent/delay-discounting-analysis.
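The trial-level comparison the model describes, the present subjective value of each option under hyperbolic discounting, can be sketched as follows. The actual package fits the discount rate hierarchically with MCMC; this is only an illustrative fragment with hypothetical function names:

```python
def present_value(amount, delay, k):
    """Hyperbolic discounting: V = A / (1 + k * D), where k is the discount rate."""
    return amount / (1.0 + k * delay)

def choose_delayed(immediate, delayed, delay, k):
    """True if the delayed reward's present subjective value exceeds the immediate one."""
    return present_value(delayed, delay, k) > immediate

# A steep discounter (large k) takes the immediate reward; a shallow
# discounter (small k) waits for the larger one.
print(choose_delayed(immediate=50, delayed=100, delay=30, k=0.1))    # False
print(choose_delayed(immediate=50, delayed=100, delay=30, k=0.001))  # True
```

The magnitude effect mentioned in the abstract corresponds to letting `k` itself vary with the reward amount, rather than treating it as a single fixed parameter per participant.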
The allocation of overt visual attention while viewing photographs of natural scenes is commonly thought to involve both bottom-up feature cues, such as luminance contrast, and top-down factors such as behavioural relevance and scene understanding. Profiting from the fact that light sources are highly visible but uninformative in visual scenes, we develop a mixture model approach that estimates the relative contribution of various low- and high-level factors to patterns of eye movements whilst viewing natural scenes containing light sources. Low-level salience accounts predicted fixations at luminance contrast and at lights, whereas these factors played only a minor role in the observed human fixations. Conversely, human data were mostly explicable in terms of a central bias and a foreground preference. Moreover, observers were more likely to look near lights rather than directly at them, an effect that cannot be explained by low-level stimulus factors such as luminance or contrast. These and other results support the idea that the visual system neglects highly visible cues in favour of less visible object information. Mixture modelling might be a good way forward in understanding visual scene exploration, since it makes it possible to measure the extent to which low-level or high-level cues act as drivers of eye movements.
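The basic idea of the mixture approach, a fixation density expressed as a weighted combination of normalised component maps, can be sketched as follows. The component maps and weights here are illustrative placeholders, not the factors or values estimated in the paper:

```python
import numpy as np

def mixture_density(component_maps, weights):
    """Convex mixture of component maps: each map is normalised to a
    probability distribution over image locations, then combined with
    weights giving each factor's relative contribution."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    maps = [np.asarray(m, dtype=float) for m in component_maps]
    maps = [m / m.sum() for m in maps]
    return sum(wi * mi for wi, mi in zip(w, maps))

# Hypothetical factors: a flat salience map and a central-bias map.
salience = np.ones((5, 5))
yy, xx = np.mgrid[0:5, 0:5]
central_bias = np.exp(-((yy - 2) ** 2 + (xx - 2) ** 2) / 4.0)

density = mixture_density([salience, central_bias], [0.2, 0.8])
```

Fitting the weights to observed fixations (e.g. by maximum likelihood) is what lets the approach quantify how much each factor drives eye movements.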
Has evolution optimized visual selective attention to make the best possible use of all available information? If so, then Bayesian-optimal performance in a localization task is achieved by optimally weighting the visual evidence with one's prior spatial expectations. In two psychophysical experiments, participants conducted covert target localization in which both visual cues and prior expectations were available. The amount of information conveyed by the visual evidence was held constant, while the degree of belief was manipulated via peripheral cuing (Experiment 1) and spatial probabilities (Experiment 2). Several findings result: (1) People appear to optimally combine slightly biased prior beliefs with sensory evidence. (2) These biases are directly comparable to those descriptively accounted for by Prospect Theory. (3) Probabilistic information about a target's upcoming location is integrated identically, irrespective of whether endogenous or exogenous cuing is used. (4) In localization tasks, spatial attention can be understood and quantitatively modeled as a set of prior expectations over space that modulate incoming noisy sensory evidence.
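Under Gaussian assumptions, the optimal weighting of prior expectation and visual evidence reduces to a precision-weighted average, a standard result which the following minimal sketch illustrates (all values are illustrative, not the experimental parameters):

```python
def combine_gaussian(mu_prior, var_prior, mu_like, var_like):
    """Bayes-optimal fusion of a Gaussian prior and Gaussian likelihood:
    the posterior mean is the precision-weighted average of the two means,
    and the posterior precision is the sum of the two precisions."""
    w_prior = 1.0 / var_prior
    w_like = 1.0 / var_like
    mu_post = (w_prior * mu_prior + w_like * mu_like) / (w_prior + w_like)
    var_post = 1.0 / (w_prior + w_like)
    return mu_post, var_post

# An unreliable cue (high variance) pulls the localization estimate only
# part of the way from the prior expectation towards the sensory evidence.
mu, var = combine_gaussian(mu_prior=0.0, var_prior=1.0, mu_like=4.0, var_like=3.0)
# posterior mean 1.0, posterior variance 0.75 (to floating-point precision)
```

Manipulating cue validity or spatial probability, as in the two experiments, amounts to shifting `mu_prior` and `var_prior` while holding the evidence variance constant.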
Despite embodying fundamentally different assumptions about attentional allocation, a wide range of popular models of attention include a max-of-outputs mechanism for selection. Within these models, attention is directed to the item with the most extreme value along a perceptual dimension via, for example, a winner-take-all mechanism. From a detection-theoretic perspective, this MAX observer can be optimal in specific situations; however, under distracter heterogeneity manipulations or in natural visual scenes this is not always the case. We derive a Bayesian maximum a posteriori (MAP) observer, which is optimal in both these situations. While it retains a form of the max-of-outputs mechanism, it operates on a maximum a posteriori probability dimension instead of a perceptual dimension. To test this model we investigated human visual search performance using a yes/no procedure while adding external orientation uncertainty to distracter elements. The results are much better fitted by the predictions of a MAP observer than a MAX observer. We conclude that a max-like mechanism may well underlie the allocation of visual attention, but one based upon a probability dimension, not a perceptual dimension.
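The contrast between the two decision rules can be sketched as follows, assuming Gaussian noise, a known target signal `d_prime`, and independent items; all names and parameter values are illustrative, not the paper's implementation:

```python
import math

def normal_pdf(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def map_present_probability(xs, sigmas, d_prime=1.0, prior=0.5):
    """Posterior probability that a target is present, marginalising over
    which of the items it might be. Heterogeneous noise (sigmas) is
    handled by weighting each item's likelihood ratio appropriately."""
    n = len(xs)
    lr = sum(normal_pdf(x, d_prime, s) / normal_pdf(x, 0.0, s)
             for x, s in zip(xs, sigmas)) / n
    return (lr * prior) / (lr * prior + (1.0 - prior))

def max_observer(xs, criterion):
    """Classic MAX rule: 'present' whenever any raw value exceeds a criterion."""
    return max(xs) > criterion

# A large value from a very noisy item fools the MAX rule, but the MAP
# observer discounts it because noisy items produce extreme values often.
xs, sigmas = [2.0, 0.1], [3.0, 0.5]
print(max_observer(xs, criterion=1.5))            # True ("present")
print(map_present_probability(xs, sigmas) < 0.5)  # True (MAP says "absent")
```

This is the sense in which the MAP observer keeps a max-like selection step but applies it on a probability dimension rather than the raw perceptual dimension.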
How do our valuation systems change to homeostatically correct undesirable psychological or physiological states, such as those caused by hunger? There is evidence that hunger increases discounting for food rewards, biasing choices towards smaller-but-sooner food rewards over larger-but-later rewards. However, it is not understood how hunger modulates delay discounting for non-food items. We outline and quantitatively evaluate six possible models of how our valuation systems modulate discounting of various commodities in the face of the undesirable state of being hungry. With a repeated-measures design, an experimental hunger manipulation, and quantitative modeling, we find strong evidence that hunger causes large increases in delay discounting for food, with an approximately 25% spillover effect to non-food commodities. The results provide evidence that in the face of hunger, our valuation systems increase discounting for commodities that cannot achieve a desired state change, as well as for those that can. Given that strong delay discounting can cause negative outcomes in many non-food (consumer, investment, medical, or inter-personal) domains, the present findings suggest caution may be necessary when making decisions involving non-food outcomes while hungry.
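One simple way to express a partial spillover of the hunger effect is to shift the (log) discount rate by a full effect for food and a fraction of it for non-food commodities. This is a hypothetical parameterisation for illustration only, not one of the six models evaluated in the paper:

```python
def shifted_discount_rate(log_k_sated, hunger_effect, spillover=0.25, is_food=True):
    """Hypothetical spillover parameterisation: hunger shifts log(k) by the
    full effect for food rewards, and by `spillover` times that effect for
    non-food commodities (0.25 matches the ~25% spillover reported)."""
    return log_k_sated + hunger_effect * (1.0 if is_food else spillover)

# A hunger effect of +2 on log(k) for food implies a +0.5 shift for non-food.
food_hungry = shifted_discount_rate(-3.0, 2.0, is_food=True)      # -1.0
nonfood_hungry = shifted_discount_rate(-3.0, 2.0, is_food=False)  # -2.5
```

Comparing such nested parameterisations (e.g. spillover fixed at 0, free, or 1) is the kind of quantitative model comparison the abstract describes.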