2016
DOI: 10.48550/arxiv.1612.04642
Preprint

Harmonic Networks: Deep Translation and Rotation Equivariance


Cited by 14 publications (22 citation statements)
References 0 publications
“…This approach to training the deep neural network is not absolutely rotationally invariant; however, the numerical error incurred under rotation was of the same order as the error of the method itself. Recent proposals to modify the network architecture itself to make it rotationally invariant are promising, as the additional training cost of using an augmented dataset could be avoided [59,60].…”
Section: Discussion
confidence: 99%
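The invariance check this excerpt describes can be sketched in a few lines: rotate the input, re-run inference, and measure how far the prediction drifts from the unrotated baseline. In the snippet below, `model` is a hypothetical callable standing in for any trained network returning an array of scores; it is illustrative only, not code from [59] or [60].

```python
# Minimal sketch of a rotation-invariance check, assuming a trained
# network wrapped as a plain callable `model` (hypothetical).
import numpy as np
from scipy.ndimage import rotate

def rotation_error(model, image, angles=(90, 180, 270)):
    """Mean absolute prediction drift under input rotations."""
    baseline = model(image)  # prediction on the unrotated input
    drifts = []
    for angle in angles:
        rotated = rotate(image, angle, reshape=False, order=1)
        drifts.append(np.abs(model(rotated) - baseline).mean())
    return float(np.mean(drifts))
```

A drift on the order of the method's own error, as the excerpt reports, indicates approximate (though not exact) rotational invariance.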
“…Much recent work focuses on CNNs. Some works learn invariant CNN representations with respect to specific transformations such as symmetry [11], scale [21], and rotation [52]. However, these works assume the transformations are fixed and known, which restricts their generalization to new tasks with unknown transformations.…”
Section: Related Work
confidence: 99%
“…RotEqNet obtains an error of 1.09%, a small improvement over the state-of-the-art TI-pooling [16], but with almost 100× fewer parameters. Test-time data augmentation further reduces the error to 1.01%, improving significantly over TI-pooling and over the more recent H-Net [29] and ORN [31].…”
Section: Methods
confidence: 97%
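The test-time data augmentation mentioned in this excerpt can be sketched as averaging the network's scores over rotated copies of the input. Here `model` and the angle grid are illustrative assumptions, not the actual RotEqNet implementation.

```python
# Hedged sketch of test-time rotation augmentation: average class scores
# over evenly spaced input rotations. `model` is a hypothetical callable.
import numpy as np
from scipy.ndimage import rotate

def predict_with_rotations(model, image, n_angles=8):
    """Average class scores over n_angles evenly spaced input rotations."""
    scores = [
        model(rotate(image, angle, reshape=False, order=1))
        for angle in np.linspace(0.0, 360.0, n_angles, endpoint=False)
    ]
    return np.mean(scores, axis=0)
```

Averaging over rotations smooths out orientation-dependent errors, which is consistent with the small additional accuracy gain the excerpt reports.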
“…Error rate (in %):
SVM [17]          10.38 ± 0.27
TIRBM [26]         4.2
H-Net [29]         1.69
ORN [31]           1.54
TI-pooling [16]    1.…

Results: We first studied the behavior of RotEqNet with respect to the total number of parameters and compared it to the state-of-the-art TI-pooling [16].…”
Section: Methods
confidence: 99%