2016
DOI: 10.3758/s13414-016-1191-7
An applet for the Gabor similarity scaling of the differences between complex stimuli

Abstract: It is widely accepted that after the first cortical visual area, V1, a series of stages achieves a representation of complex shapes, such as faces and objects, so that they can be understood and recognized. A major challenge for the study of complex shape perception has been the lack of a principled basis for scaling of the physical differences between stimuli so that their similarity can be specified, unconfounded by early-stage differences. Without the specification of such similarities, it is difficult to m…

Cited by 20 publications (21 citation statements). References 25 publications.
“…Finally, in addition to these shape features, we also computed an independent measure of shape similarity using the Malsburg Gabor-jet model (Lades et al., 1993; Margalit, Biederman, Herald, Yue, & von der Malsburg, 2016), which has been shown to robustly track human discrimination performance for metric differences between shapes (Yue, Biederman, Mangini, von der Malsburg, & Amir, 2012). Inspired by the Gabor-like filtering of simple cells in V1 (Jones & Palmer, 1987), this model overlays sets (or "jets") of 40 Gabor filters (5 scales × 8 orientations) on each pixel of a 128 × 128-pixel image and calculates the convolution of the input image with each filter, storing both the magnitude and the phase of the filtered image.…”
Section: Results
Confidence: 99%
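The filtering scheme described in the excerpt above (a "jet" of 40 complex Gabor responses, 5 scales × 8 orientations, at each pixel, with dissimilarity taken between the resulting feature vectors) can be sketched as follows. This is a minimal NumPy sketch, not the published implementation: the peak frequencies and bandwidths are illustrative assumptions, the filtering is done in the frequency domain for convenience, and only response magnitudes are used for the distance, following the magnitude-based dissimilarity reported in the cited work.

```python
import numpy as np

def gabor_jet_features(image, n_scales=5, n_orientations=8):
    """Gabor-jet magnitudes for a square grayscale image.

    Returns an (n_scales * n_orientations, n_pixels) array: one
    magnitude response per filter per pixel. Filter parameters
    (peak frequency k0, bandwidth sigma) are assumed values for
    illustration, not those of the published model.
    """
    size = image.shape[0]
    fft_img = np.fft.fft2(image)
    freqs = np.fft.fftfreq(size)              # cycles per pixel
    kx, ky = np.meshgrid(freqs, freqs, indexing="ij")
    magnitudes = []
    for s in range(n_scales):
        k0 = 0.25 / (2 ** s)                  # peak frequency halves per scale (assumed)
        sigma = k0 / 2.0                      # bandwidth tied to frequency (assumed)
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            u, v = k0 * np.cos(theta), k0 * np.sin(theta)
            # Gaussian transfer function centred on (u, v): a
            # frequency-domain stand-in for an oriented Gabor filter.
            kernel = np.exp(-((kx - u) ** 2 + (ky - v) ** 2) / (2 * sigma ** 2))
            response = np.fft.ifft2(fft_img * kernel)  # complex: magnitude + phase
            magnitudes.append(np.abs(response))
    return np.stack(magnitudes, axis=0).reshape(n_scales * n_orientations, -1)

def gabor_jet_dissimilarity(img_a, img_b):
    """Euclidean distance between the two magnitude feature vectors."""
    fa = gabor_jet_features(img_a).ravel()
    fb = gabor_jet_features(img_b).ravel()
    return float(np.linalg.norm(fa - fb))
```

With 5 scales and 8 orientations the feature matrix has 40 rows, matching the 40-filter jets described above; identical images yield a distance of zero, and the distance grows with the physical difference between the images.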
“…Average psychophysical distances according to the Gabor-jet model (Lades et al., 1993; Margalit et al., 2016)…”
Confidence: 99%
“…This may be accounted for by the fact that while the scrambled and experimental stimuli were originally designed to have identical physical size (in terms of pixel count; for examples of stimuli, see Figure 1), the density and distribution of these pixels may differ across the different types of stimuli, leading to this unexpected effect. This interpretation was supported by an image analysis using a V1-like Gabor-jet filter model (Yue et al., 2012; Margalit et al., 2016; see Supplementary Information).…”
Section: Results
Confidence: 64%
“…Gabor-jet (GBJ) model. The GBJ model is a low-level model of image similarity inspired by the response profile of complex cells in early visual cortex [46]. It has been shown to scale with human psychophysical dissimilarity judgments of faces and simple objects [77].…”
Section: Methods
Confidence: 99%
“…In all cases, we examined the unique contributions of skeletal structures in object recognition by contrasting the shape skeleton with models of vision that do not explicitly incorporate a skeletal structure, but are nevertheless predictive of human object recognition. These models included those that describe visual similarity by their image statistics, namely, the Gabor-Jet (GBJ) model [46] and the GIST model [47], as well as biologically plausible neural network models, namely, the HMAX model [40] and AlexNet, a CNN pre-trained to identify objects [41]. To anticipate our findings, a model of skeletal similarity was predictive of participants' perceptual similarity and classification judgments even when accounting for these other models, suggesting that skeletal descriptions of shape play a crucial role in human object recognition, independent of other models of shape and object perception.…”
Section: A, B
Confidence: 99%