2011
DOI: 10.1007/978-3-642-21735-7_1
Transformation Equivariant Boltzmann Machines

Abstract: We develop a novel modeling framework for Boltzmann machines, augmenting each hidden unit with a latent transformation assignment variable which describes the selection of the transformed view of the canonical connection weights associated with the unit. This enables the inferences of the model to transform in response to transformed input data in a stable and predictable way, and avoids learning multiple features differing only with respect to the set of transformations. Extending prior work on tran…

Cited by 46 publications (29 citation statements)
References 14 publications
“…Pooling is the use of summary statistics of adjacent outputs in a feature map to determine the activations to be propagated to the subsequent layer. It results in rotationally and translationally invariant features [34]. For a window of size p × p, average pooling returns the arithmetic mean of signals in the window whereas max-pooling returns the dominant signal in that window.…”
Section: CNN Building Blocks
confidence: 99%
“…It results in rotationally and translationally invariant features [34]. For a window of size p × p, average pooling returns the arithmetic mean of signals in the window whereas maxpooling returns the dominant signal in that window.…”
Section: CNN Building Blocks
confidence: 99%
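The pooling operations described in the statements above can be sketched in a few lines. This is a minimal illustration, not code from the cited papers; the function name `pool2d` and the use of NumPy are my own assumptions.

```python
import numpy as np

def pool2d(x, p, mode="max"):
    """Pool a 2-D feature map x over non-overlapping p x p windows.

    Hypothetical helper for illustration only.
    """
    h, w = x.shape
    # Trim the map so it divides evenly into p x p windows.
    x = x[: h - h % p, : w - w % p]
    windows = x.reshape(x.shape[0] // p, p, x.shape[1] // p, p)
    if mode == "max":
        # Max pooling: the dominant signal in each window.
        return windows.max(axis=(1, 3))
    # Average pooling: the arithmetic mean of signals in each window.
    return windows.mean(axis=(1, 3))

fmap = np.arange(16.0).reshape(4, 4)
print(pool2d(fmap, 2, "max"))   # [[ 5.  7.] [13. 15.]]
print(pool2d(fmap, 2, "mean"))  # [[ 2.5  4.5] [10.5 12.5]]
```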
“…Current approaches do not address the problem of rotation-invariance directly, but use a predefined set of transformations to transform either the input images [19,21] or the learned filters [13,20]. We were inspired by these approaches to modify the RBM learning process so as to learn invariant features without taking into account all possible transformations, which is demanding and may propagate noise due to pixel interpolations.…”
Section: Discussion
confidence: 99%
“…In [21], a transformation-invariant RBM is proposed, where images are subjected to a predefined set of transformations. In [13], an RBM that learns equivariant features is proposed, where a new variable to be inferred is added within the hidden units; this variable is then used to rotate the learned weights accordingly. In [19], a rotation-invariant Convolutional RBM is proposed.…”
Section: Introduction
confidence: 99%
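The transformation-assignment idea summarized in the statement above can be sketched as a toy matching step: score an input against transformed views of one canonical weight filter and infer which transformation fits best. This is a simplified sketch restricted to 90-degree rotations, assuming NumPy; the function name `equivariant_response` is hypothetical and not from the paper.

```python
import numpy as np

def equivariant_response(patch, weight):
    """Score a patch against all four 90-degree rotations of a canonical
    weight filter; return (best_score, inferred_rotation_index).

    Toy sketch of a latent transformation assignment: the hidden unit
    infers which transformed view of its weights matches the input.
    """
    scores = [float(np.sum(patch * np.rot90(weight, k))) for k in range(4)]
    k_best = int(np.argmax(scores))
    return scores[k_best], k_best

# A small vertical-edge filter; rotating the input rotates the inferred
# assignment rather than requiring a separately learned rotated feature.
w = np.array([[1.0, -1.0], [1.0, -1.0]])
x = np.rot90(w, 1)                    # input: the filter rotated 90 degrees
score, k = equivariant_response(x, w)
print(k)                              # infers rotation index 1
```

Equivariance here means the inferred assignment tracks the input's transformation in a stable, predictable way, which is the behavior the abstract describes.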