2022 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip46576.2022.9897301
Differential Invariants for SE(2)-Equivariant Networks

Abstract: Symmetry is present in many tasks in computer vision, where the same class of objects can appear transformed, e.g. rotated due to different camera orientations, or scaled due to perspective. The knowledge of such symmetries in data coupled with equivariance of neural networks can improve their generalization to new samples. Differential invariants are equivariant operators computed from the partial derivatives of a function. In this paper we use differential invariants to define equivariant operators that form…
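For intuition, the simplest SE(2) (rotation and translation) differential invariants are the gradient magnitude and the Laplacian, both computed from Gaussian-smoothed partial derivatives. The sketch below only illustrates that general idea; it is not the operators or architecture from the paper, and the function name, the sigma value, and the use of SciPy's Gaussian derivative filters are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def se2_invariants(image, sigma=2.0):
    """Illustrative sketch: two classical SE(2) differential invariants
    (gradient magnitude and Laplacian) computed from Gaussian derivatives.
    Not the paper's operators; just the basic idea of building
    rotation/translation-invariant quantities from partial derivatives."""
    # Gaussian derivatives; order is given per axis (axis 0 = rows/y, axis 1 = cols/x).
    Lx  = gaussian_filter(image, sigma, order=(0, 1))
    Ly  = gaussian_filter(image, sigma, order=(1, 0))
    Lxx = gaussian_filter(image, sigma, order=(0, 2))
    Lyy = gaussian_filter(image, sigma, order=(2, 0))

    grad_mag  = np.sqrt(Lx**2 + Ly**2)  # unchanged at corresponding points under rotation/translation
    laplacian = Lxx + Lyy               # likewise rotation/translation invariant
    return grad_mag, laplacian
```

When the input image is rotated, the resulting maps co-rotate with it, so stacking such invariants as channels yields equivariant feature maps that can feed subsequent layers.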

Cited by 3 publications (6 citation statements). References: 17 publications.
“…Particularly, in comparison with L-STN, which used linear interpolation, ESTN shows significant improvements with very similar network settings. In addition, ESTN has higher accuracy compared to recently developed methods such as ScDCFNet [34] and SE3Mov [23].…”
Section: SVHN and CelebA (mentioning)
confidence: 99%
“…In [34], an equivariant CNN to space and scaling is proposed, which is equivariant for the regular representation of scaling and translation. Moreover, [23] proposed an efficient equivariant network by computing moving frames only at the input stage rather than repeating the computation at all layers. In fact, these approaches support limited transformations because the computational cost increases linearly with the cardinality ratio of the transformation.…”
Section: Related Work (mentioning)
confidence: 99%
“…In summary, this theory states that convolutions with Gaussian kernels and Gaussian derivatives constitute a canonical class of image operations for a first layer of visual processing. Such spatial receptive fields, or approximations thereof, can, in turn, be used as a basis for expressing a large variety of image operations, both in classical computer vision [8,12,15,42,48,49,51,53,62,63,79,89] and more recently in deep learning [26,32,57,58,69,72,78].…”
Section: Introduction (mentioning)
confidence: 99%
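As a hedged illustration of the idea quoted above (not code from any of the cited works), such a first layer can be realized as a fixed bank of Gaussian-derivative responses up to second order, stacked as feature channels; the function name and the chosen derivative orders below are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivative_layer(image, sigma=1.5):
    """Fixed 'first layer' of Gaussian-derivative responses up to order 2
    (L, Lx, Ly, Lxx, Lxy, Lyy), stacked as channels. A sketch of using
    Gaussian derivatives as a basis for early visual processing; the cited
    deep-learning works parameterize and combine such responses differently."""
    orders = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]  # per-axis (y, x) derivative orders
    return np.stack([gaussian_filter(image, sigma, order=o) for o in orders], axis=0)
```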
“…One such domain, one that motivates the present deeper study of discretization effects for Gaussian smoothing operations and Gaussian derivative computations at fine scales, is the application of Gaussian derivative operations in deep networks, as done in a recently developed subdomain of deep learning [26,32,57,58,69,72,78].…”
Section: Introduction (mentioning)
confidence: 99%
“…Then this feature function is used to detect corners and edges depending on where the function attains its local extrema. In deep learning, exploiting the intrinsic invariance and equivariance properties of the data is an active area of research [6,7,8,9]. Two of the main motivations to study equivariance and invariance in neural networks are quite intuitive: i) Objects in natural images are not always oriented, scaled or localized in the same way unless the acquisition environment is highly constrained.…”
Section: Introduction (mentioning)
confidence: 99%