How does the visual system represent continuity in the constantly changing visual input? A recent proposal is that vision is serially dependent: stimuli seen a moment ago influence what we perceive in the present. In line with this, recent frameworks suggest that the visual system anticipates whether an object seen at one moment is the same as the one seen a moment ago, binding visual representations across consecutive perceptual episodes. A growing body of work supports this view, revealing signatures of serial dependence across many diverse visual tasks. Yet the variety of disparate findings and interpretations calls for a more general picture. Here, we survey the main paradigms and results from the past decade. We also focus on the challenge of relating serial dependence to the concept of “object identity,” taking the centuries-long history of research on this concept into account. Among the seemingly contrasting findings on serial dependence, we highlight common patterns that may elucidate the nature of this phenomenon, and we identify questions that remain unanswered.
Observers can learn complex statistical properties of visual ensembles, such as their probability distributions. Even though ensemble encoding is considered critical for peripheral vision, whether observers learn such distributions in the periphery has not been studied. Here, we used a visual search task to investigate how the shape of distractor distributions influences search performance and ensemble encoding in peripheral and central vision. Observers looked for an oddly oriented bar among distractors drawn from either uniform or Gaussian orientation distributions with the same mean and range. The search arrays were presented in either the foveal or the peripheral visual field. The repetition and role-reversal effects on search times revealed observers' internal model of the distractor distributions. Our results showed that the shape of the distractor distribution influenced search times in foveal search, but not in peripheral search. However, role-reversal effects revealed that the shape of the distractor distribution could be encoded peripherally, depending on the interitem spacing in the search array. Our results suggest that, although peripheral vision might rely heavily on summary statistical representations of feature distributions, it can also encode information about the distributions themselves.
Recent accounts of perception and cognition propose that the brain represents information probabilistically. While this assumption is common, empirical support for such probabilistic representations in perception has recently been criticized. Here, we evaluate these criticisms and present an account based on a recently developed psychophysical methodology, Feature Distribution Learning (FDL), which provides promising evidence for probabilistic representations by avoiding these criticisms. The method uses priming and role-reversal effects in visual search: observers' search times reveal the structure of perceptual representations, in which the probability distribution of distractor features is encoded. We explain how FDL results provide evidence for a stronger notion of representation that relies on structural correspondence between stimulus uncertainty and perceptual representations, rather than mere covariation between the two. Moreover, such an account allows us to demonstrate what kind of empirical evidence is needed to support probabilistic representations as posited in current probabilistic Bayesian theories of perception.
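The logic of the role-reversal effect described above can be sketched in a toy model: if observers encode the distractor feature distribution, then on a subsequent trial, search should be slower the more probable the new target's feature value is under the previously learned distractor distribution. The sketch below is a minimal illustration under assumed parameters (the Gaussian form, the baseline and gain values, and the linear RT mapping are illustrative assumptions, not the authors' fitted model).

```python
import math

def gaussian_pdf(x, mean, sd):
    """Probability density of a Gaussian at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def predicted_rt(target_orientation, learned_mean, learned_sd,
                 baseline_ms=600.0, gain_ms=400.0):
    """Toy role-reversal prediction: search is slower the more probable
    the current target is under the previously learned distractor
    distribution. Baseline and gain are arbitrary illustrative values."""
    density = gaussian_pdf(target_orientation, learned_mean, learned_sd)
    peak = gaussian_pdf(learned_mean, learned_mean, learned_sd)
    return baseline_ms + gain_ms * (density / peak)

# Target at the old distractor mean: maximal predicted slowdown.
rt_center = predicted_rt(0.0, learned_mean=0.0, learned_sd=10.0)
# Target far outside the old distractor distribution: near baseline.
rt_far = predicted_rt(40.0, learned_mean=0.0, learned_sd=10.0)
```

On this toy account, plotting predicted RT against the target's distance from the old distractor mean traces out the shape of the internal distribution, which is how FDL infers whether observers encoded a uniform versus a Gaussian distractor distribution.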