Figure-ground segregation is the process by which the visual system identifies image elements of figures and segregates them from the background. Previous studies examined figure-ground segregation in the visual cortex of monkeys, where figures elicit stronger neuronal responses than backgrounds. Studies in anesthetized mice demonstrated that neurons in the primary visual cortex (V1) are sensitive to orientation contrast, but it is unknown whether mice can perceptually segregate figures from a background. Here, we examined figure-ground perception in mice and found that mice can detect figures defined by an orientation that differs from the background, even when the figure size, position, or phase varied. Electrophysiological recordings in V1 of awake mice revealed that responses elicited by figures were stronger than those elicited by the background, and stronger still at the edge between figure and background. A figural response could even be evoked in the absence of a stimulus in the V1 receptive field. Current-source-density analysis suggested that the extra activity was caused by synaptic inputs into layer 2/3. We conclude that the neuronal mechanisms of figure-ground segregation in mice are similar to those in primates, enabling investigation with the powerful techniques for circuit analysis now available in mice.
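Current-source-density analysis estimates the depth profile of current sinks and sources from the second spatial derivative of the local field potential recorded across laminar electrode contacts. The sketch below illustrates the standard second-derivative estimator on synthetic data; it is not the study's pipeline, and the function name, contact spacing, and conductivity value are illustrative assumptions.

```python
import numpy as np

def csd_second_derivative(lfp, spacing_um=100.0, sigma=0.3):
    """Estimate current source density from laminar LFP.

    lfp: array (n_channels, n_samples), channels ordered by depth.
    spacing_um: electrode contact spacing in micrometers (assumed).
    sigma: extracellular conductivity in S/m (typical cortical value).
    Returns CSD for the interior channels, shape (n_channels - 2, n_samples).
    """
    h = spacing_um * 1e-6  # contact spacing in meters
    # Discrete second spatial derivative along the depth axis
    d2 = lfp[2:] - 2.0 * lfp[1:-1] + lfp[:-2]
    return -sigma * d2 / h**2

# Synthetic example: a depth-localized oscillatory LFP across 16 contacts
depth = np.linspace(0, 1, 16)[:, None]
t = np.linspace(0, 0.1, 200)[None, :]
lfp = np.exp(-((depth - 0.3) ** 2) / 0.02) * np.sin(2 * np.pi * 20 * t)

csd = csd_second_derivative(lfp)
print(csd.shape)  # (14, 200): the two boundary channels are dropped
```

The second-derivative estimator loses the outermost channels; published analyses often use variants (e.g., spline or kernel CSD) that handle boundaries and noise more gracefully.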
Rodents have become a popular model in vision science, yet it is still unclear how rodent vision relates to primate vision when it comes to complex visual tasks. Here we report the results of training rats in a face-categorization and generalization task. Additionally, we used the Bubbles paradigm to determine the animals' behavioral templates. We found that rats are capable of face categorization and can generalize to previously unseen exemplars. Performance is affected by stimulus modifications such as upside-down and contrast-inverted stimuli, but remains above chance. The behavioral templates of the rats overlap with a pixel-based template, with a bias toward the upper left parts of the stimuli. Together, these findings significantly expand the evidence on the extent to which rats learn complex visual-categorization tasks.
Nonhuman primates are the main animal model to investigate high-level properties of human cortical vision. For one property, transformation-invariant object recognition, recent studies have revealed interesting and unknown capabilities in rats. Here we report on the ability of rats to rely upon second-order cues that are important to structure the incoming visual images into figure and background. Rats performed a visual shape discrimination task in which the shapes were not only defined by first-order luminance information but also by a variety of second-order cues such as a change in texture properties. Once the rats were acquainted with a first set of second-order stimuli, they showed a surprising degree of generalization towards new second-order stimuli. The limits of these capabilities were tested in various ways, and the ability to extract the shapes broke down only in extreme cases where no local cues were available to solve the task. These results demonstrate how rats are able to make choices based on fairly complex strategies when necessary.
The visual system processes visual input in a hierarchical manner to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features, which tend to be correlated. Alternatively, rats might extract complex features or feature combinations that are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping can be solved with a simple linear feature, while another mapping requires a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1 model. We show that rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution than the linear one. The implications for the computational capabilities of the rat visual system are discussed.
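The linear/nonlinear distinction above can be illustrated with a toy case (not the study's actual stimuli): a mirror-symmetry label on two binary "pixels" is the XNOR function, which no single linear readout can compute. The brute-force search below shows that random linear classifiers never classify all four cases correctly.

```python
import numpy as np

# Toy stimuli: two binary "pixels". The symmetry label (left == right)
# is the XNOR function, the classic non-linearly-separable mapping.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = (X[:, 0] == X[:, 1]).astype(int)

rng = np.random.default_rng(0)
best_acc = 0.0
for _ in range(20000):
    w = rng.normal(size=2)       # random linear readout weights
    b = rng.normal()             # random bias
    pred = (X @ w + b > 0).astype(int)
    best_acc = max(best_acc, (pred == y).mean())

# No linear readout gets all four cases right; accuracy caps at 3/4.
print(best_acc)
```

A formal argument is shorter still: perfect separation would require w1 + w2 + 2b to be simultaneously positive (from the two symmetric cases) and negative (from the two asymmetric ones), a contradiction. Solving the task therefore requires a nonlinear combination of the pixel features.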
Do individuals prefer stimuli that are ordered or disordered, simple or complex, or that strike the right balance of order and complexity? Earlier research mainly focused on the separate influence of order and complexity on aesthetic appreciation. When order and complexity were studied in combination, stimulus manipulations were often not parametrically controlled, only rather specific types of order (i.e., balance or symmetry) were usually studied, and/or the multidimensionality of order and complexity was largely ignored. Progress has also been limited by the lack of an easy way to create reproducible and expandable stimulus sets, including both order and complexity manipulations. The Order & Complexity Toolbox for Aesthetics (OCTA), a Python toolbox that is also available as a point-and-click Shiny application, aims to fill this gap. OCTA provides researchers with a free and easy way to create multi-element displays varying qualitatively (i.e., different types) and quantitatively (i.e., different levels) in order and complexity, based on regularity and variety along multiple element features (e.g., shape, size, color, orientation). The standard vector-based output is ideal for experiments on the web and the creation of dynamic interfaces and stimuli. OCTA will facilitate reproducible stimulus construction and experimental design in research on order, complexity, and aesthetics. In addition, OCTA can be a very useful tool in any type of research using visual stimuli, or even in creating digital art. To illustrate OCTA's potential, we propose several possible applications and diverse questions that can be addressed using OCTA.
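The core idea of varying order while holding complexity constant can be sketched without OCTA's own API (which is not reproduced here). In the self-contained example below, the hypothetical helper `grid_svg` emits vector (SVG) output, like OCTA's standard output format, and contrasts a regular color pattern with a shuffled one that uses the same set of element colors.

```python
import random

def grid_svg(n_rows=4, n_cols=4, colors=("#1b9e77", "#d95f02"),
             pattern="alternate", cell=40, seed=0):
    """Illustrative order/complexity manipulation (not OCTA's API):
    'alternate' repeats a regular checkerboard color pattern (high order),
    'random' shuffles the color assignment (lower order, same number of
    distinct colors, i.e., comparable complexity on that dimension)."""
    rng = random.Random(seed)
    cells = []
    for r in range(n_rows):
        for c in range(n_cols):
            if pattern == "alternate":
                fill = colors[(r + c) % len(colors)]  # regular assignment
            else:
                fill = rng.choice(colors)             # irregular assignment
            cells.append(
                f'<rect x="{c * cell}" y="{r * cell}" width="{cell}" '
                f'height="{cell}" fill="{fill}"/>'
            )
    w, h = n_cols * cell, n_rows * cell
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{w}" height="{h}">' + "".join(cells) + "</svg>")

ordered = grid_svg(pattern="alternate")
disordered = grid_svg(pattern="random")
```

The same scheme extends to other element features (shape, size, orientation), which is the kind of multidimensional manipulation the toolbox parameterizes.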