2018
DOI: 10.1016/j.neucom.2018.02.029

Deep Rotation Equivariant Network

Abstract: Recently, learning equivariant representations has attracted considerable research attention. Dieleman et al. introduce four operations which can be inserted into a convolutional neural network to learn deep representations equivariant to rotation. However, in their approach feature maps must be copied and rotated four times in each layer, which incurs substantial running-time and memory overhead. In order to address this problem, we propose the Deep Rotation Equivariant Network consisting of cycle layers, isotonic layer…
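The abstract contrasts two ways of obtaining rotation equivariance: copying and rotating the feature maps four times per layer (Dieleman et al.) versus building the four rotations into the layer itself. As a rough illustration of the latter idea only, the minimal sketch below applies a single filter bank at four 90-degree rotations; the function name cycle_conv2d and the tensor shapes are our own assumptions for illustration, not the authors' DREN implementation.

import torch
import torch.nn.functional as F

def cycle_conv2d(x, weight, bias=None, padding=1):
    # x: (N, C_in, H, W); weight: (C_out, C_in, k, k).
    # Apply the same filters at rotations of 0, 90, 180 and 270 degrees,
    # so the filters are rotated instead of the feature maps.
    outputs = []
    for r in range(4):
        w_r = torch.rot90(weight, k=r, dims=(2, 3))   # rotate the spatial dims of the filters
        outputs.append(F.conv2d(x, w_r, bias=bias, padding=padding))
    # Stack along a new orientation axis: (N, 4, C_out, H', W').
    return torch.stack(outputs, dim=1)

# Toy usage: eight 3x3 filters over a 3-channel input, four orientations out.
x = torch.randn(2, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
y = cycle_conv2d(x, w)   # shape: (2, 4, 8, 32, 32)

Pooling or sharing weights across the orientation axis of such an output then yields rotation-invariant features without ever copying and rotating the feature maps themselves, which is the overhead the abstract points to.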

Cited by 33 publications (29 citation statements)
References 27 publications
“…Dieleman et al () first tried the concept of Cyclic Pooling and Rolling on a different class of rotationally invariant images (galaxy morphology). An alternative method of obtaining rotational augmentations by Li et al () may be more efficient than the one we used from Lasagne (Dieleman et al ). Additionally, iterative augmentation is possible by using early models to iteratively process available unlabeled data in order to harvest additional training images, as described in Luo et al ().…”
Section: Discussion
confidence: 99%
“…Recently there has been an explosion of interest into CNNs with predefined transformation equivariances, beyond translation [2,3,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19] [20, 21,22,23]. However, with the exception of Cohen and Welling [23] (projections on sphere), Kondor [22] (point clouds), and Thomas et al [20] (point clouds), these have mainly focused on the 2D scenario.…”
Section: Related Work
confidence: 99%
“…(Fasel and Gatica-Perez 2006) and (Dieleman, Willett, and Dambre 2015) rotate the input itself before feeding it into stacks of CNNs and generating rotation invariant representations through gradual pooling or parameter sharing. (Teney and Hebert 2016;Wu, Hu, and Kong 2015;Li et al 2017) rotate the convolution filters (a cheaper albeit still expensive operation) instead of transforming the input followed by pooling. A similar approach was explored for scale by (Xu et al 2014).…”
Section: Prior Art
confidence: 99%
“…However, a number of studies have observed approximate albeit sufficient invariances in practice under this setting (Anselmi et al 2013;Pal, Juefei-Xu, and Savvides 2016;Pal et al 2017;Liao, Leibo, and Poggio 2013). The main motive for modelling transformations as unitary groups was to provide a theoretical connection to ConvNets and other methods that enforce other kinds of unitary invariance such as rotation invariance (Li et al 2017;Wu, Hu, and Kong 2015). However, real-world data experiences a large array of transformations acting, which certainly lie outside the span of unitary transformations.…”
Section: The Transformation Network Paradigm
confidence: 99%