2021 55th Asilomar Conference on Signals, Systems, and Computers
DOI: 10.1109/ieeeconf53345.2021.9723377

Truly Shift-Equivariant Convolutional Neural Networks with Adaptive Polyphase Upsampling

Cited by 5 publications (5 citation statements) · References 8 publications

“…In practice, equivariance may be broken by incorporating additional operations such as downsampling and upsampling, although this issue can be overcome by using suitable replacements of these operations. This can be done for instance using adaptive polyphase upsampling and downsampling, and was applied to some Fourier-based computational imaging tasks in [25]. It is worth noting that even so, edge effects that arise as a result of bounded image domains will always prevent exact translational equivariance from holding.…”
Section: Equivariance By Design (mentioning)
confidence: 99%
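The replacement operations mentioned in the statement above can be illustrated concretely. The following is a minimal sketch, assuming PyTorch, of the adaptive polyphase idea: downsampling keeps the stride-2 polyphase component with the largest norm, and upsampling places the values back at that component's grid locations. The function names `aps_downsample` / `aps_upsample` and the l2 selection criterion are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch, assuming PyTorch, of adaptive polyphase down/upsampling.
# Function names and the l2 selection criterion are illustrative assumptions.
import torch

def aps_downsample(x, p=2):
    """Keep the stride-2 polyphase component of x (B, C, H, W) with the
    largest l_p norm, so the selection moves consistently when x is shifted."""
    phases = [x[:, :, i::2, j::2] for i in range(2) for j in range(2)]
    norms = torch.stack([ph.norm(p=p) for ph in phases])
    k = int(torch.argmax(norms))              # index of the selected phase
    return phases[k], k

def aps_upsample(y, k, out_shape):
    """Place y back on the grid locations of phase k (zeros elsewhere), so the
    downsample/upsample pair commutes with integer circular shifts."""
    i, j = divmod(k, 2)
    out = y.new_zeros(out_shape)
    out[:, :, i::2, j::2] = y
    return out

# Check equivariance under a one-pixel circular shift.
x = torch.randn(1, 3, 8, 8)
y, k = aps_downsample(x)
x_rec = aps_upsample(y, k, x.shape)

xs = torch.roll(x, shifts=(1, 0), dims=(2, 3))
ys, ks = aps_downsample(xs)
xs_rec = aps_upsample(ys, ks, xs.shape)
print(torch.allclose(torch.roll(x_rec, shifts=(1, 0), dims=(2, 3)), xs_rec))  # True
```

Note that, as the quoted statement points out, this kind of exact check only holds for circular shifts; zero-padded boundaries break exact equivariance at the image edges.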
“…Machine learning algorithms employed to predict properties such as potential energy and atomic forces must yield consistent results, regardless of the molecule's rotational pose or ordering. To address this challenge, researchers have developed group-equivariant neural networks that preserve these symmetries [5], [7]–[10]. In a group-equivariant network, symmetry operations on the data, including rotations of pictures and molecules, and permutations of the labels of each particle, commute with the network's layers, ensuring that the same physical property is predicted irrespective of the input's orientation.…”
Section: Introduction (mentioning)
confidence: 99%
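The commutation property described in the statement above can be checked directly. Below is a minimal sketch, assuming PyTorch; the circular-padded convolution and the integer circular shift used as the group action are illustrative choices, not the setup of the cited work.

```python
# A minimal sketch of layer/group-action commutation (shift equivariance).
# The circular-padded convolution and the chosen shift are illustrative.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, padding_mode='circular')
x = torch.randn(1, 3, 16, 16)

def shift(t):
    # Group action T: circular translation by (2, 3) pixels.
    return torch.roll(t, shifts=(2, 3), dims=(2, 3))

# Equivariance means applying T before or after the layer gives the same output.
print(torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-6))  # True
```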
“…However, recent research has reported that CNNs can significantly degrade classification accuracy even with a single pixel translation [11], [12], and many shift-invariant feature representation methods have been proposed [13], [14]. Manfredi et al [15] reported performance degradation in object detection with small translations.…”
Section: Introduction (mentioning)
confidence: 99%
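One simple way to quantify the sensitivity reported in the statement above is to measure how often a classifier's prediction changes under a one-pixel shift. The sketch below, assuming PyTorch, uses a hypothetical helper (`shift_consistency`) with a toy untrained model and random data standing in for a trained classifier and a real test set.

```python
# A minimal sketch of a single-pixel shift-consistency check; the helper name,
# toy model, and random batch are illustrative assumptions.
import torch
import torch.nn as nn

def shift_consistency(model, x):
    """Fraction of inputs whose predicted class is unchanged after a
    one-pixel circular shift."""
    model.eval()
    with torch.no_grad():
        pred = model(x).argmax(dim=1)
        pred_shifted = model(torch.roll(x, shifts=(1, 0), dims=(2, 3))).argmax(dim=1)
    return (pred == pred_shifted).float().mean().item()

toy_model = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.Flatten(), nn.LazyLinear(10))
print(shift_consistency(toy_model, torch.randn(32, 3, 32, 32)))
```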