Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

Published 2006. DOI: 10.1162/neco.2006.18.11.2680

Abstract: Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian…
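The generative scheme the abstract describes (local gaussian variables multiplied by shared "mixer" variables) is easy to illustrate. Below is a minimal sketch in numpy, assuming a single log-normal mixer shared by two unit-variance gaussian filter variables; the log-normal prior and all parameter values are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: one log-normal mixer v shared by two
# independent, zero-mean, unit-variance gaussian filter variables.
n = 100_000
v = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # common mixer variable
g = rng.standard_normal((n, 2))                  # local gaussian structure
x = v[:, None] * g                               # observed filter responses

# The shared mixer yields the classic GSM signature: responses that are
# uncorrelated yet dependent, with correlated magnitudes and heavy tails.
print("corr(x1, x2)        =", np.corrcoef(x[:, 0], x[:, 1])[0, 1])
print("corr(|x1|, |x2|)    =", np.corrcoef(np.abs(x[:, 0]), np.abs(x[:, 1]))[0, 1])
print("excess kurtosis(x1) =", np.mean(x[:, 0] ** 4) / np.mean(x[:, 0] ** 2) ** 2 - 3)
```

In this toy version the grouping (which inputs share a mixer) is fixed by hand; the paper's extension, as the truncated abstract indicates, is to learn that assignment softly from the data.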

Cited by 30 publications (38 citation statements). References 55 publications.

Citation statements (ordered by relevance):
“…This is not surprising, as the structure is dictated by natural images. However, a novel class of second-layer functions emerges in our model that is not seen in the published work on the density components model (Karklin & Lewicki, 2003a, 2003b) or related models (Schwartz et al., 2006). This class of functions represents elongated, collinear edge structure in the image domain (see Figure 5b).…”
Section: Learned Amplitude (mentioning)
confidence: 87%
“…Models that learn form-selective invariances and focus on performance evaluation of object recognition tasks (Wallis & Rolls, 1997; LeCun et al., 2004; Serre et al., 2007) often have the specific invariance built into the model structure, and the higher-order features that emerge beyond these built-in invariances have not been explored. The model we propose here bears similarities to the density components model of Karklin and Lewicki (2005) and to the hierarchical GSM model of Schwartz, Sejnowski, and Dayan (2006), which learn higher-order structure in images by modeling the dependencies in scale among oriented filter responses. Our model differs in that form-selective invariances are learned from the temporally persistent structure contained in natural movies as opposed to static image patches.…”
Section: Introduction (mentioning)
confidence: 99%
“…This argument, together with effective nonlinear SFA models of the visual system (Berkes & Wiskott, 2005; Franzius et al., 2007), indicates that sensory systems are tailored to extract (relevant) predictive information. For further research, we suggest comparing temporally local predictive coding and slow feature analysis to generative hierarchical models for learning nonlinear statistical regularities (Karklin & Lewicki, 2005; Schwartz, Sejnowski, & Dayan, 2006).…”
Section: Discussion (mentioning)
confidence: 99%
“…Compared to other computational models accounting for extraclassical receptive field properties, our model complies with physiology better than the predictive coding model, which assumes separate units detecting error and predicting incoming input (Rao & Ballard, 1999; Spratling, 2010), and gaussian scale mixture models, which have no obvious direct relationship to physiology (Schwartz et al., 2006), even though recently Schwartz et al. (2009) viewed their model as commensurate with divisive normalization (Heeger, 1992).…”
Section: Physiology and Theory of Contextual Modulation (mentioning)
confidence: 90%
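The divisive normalization the last excerpt refers to (Heeger, 1992) divides each squared filter response by a weighted sum of squared responses in a normalization pool plus a saturation constant. A minimal sketch follows; sigma and the uniform pool weights are illustrative assumptions, not values from either paper.

```python
import numpy as np

def divisive_normalization(x, sigma=1.0, weights=None):
    """Heeger-style divisive normalization: r_i = x_i^2 / (sigma^2 + sum_j w_j x_j^2)."""
    x = np.asarray(x, dtype=float)
    if weights is None:
        weights = np.ones_like(x)          # assumed uniform normalization pool
    pool = sigma ** 2 + np.sum(weights * x ** 2)
    return x ** 2 / pool

# A co-active neighbor suppresses the normalized response, mimicking the
# extraclassical (contextual) surround effects discussed in the excerpt.
print(divisive_normalization([2.0, 0.0]))  # target response alone
print(divisive_normalization([2.0, 4.0]))  # same target with an active surround
```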