2020 · Preprint
DOI: 10.48550/arxiv.2012.13882

Universal Approximation Theorem for Equivariant Maps by Group CNNs

Abstract: Group symmetry is inherent in a wide variety of data distributions. Data processing that preserves symmetry is described by an equivariant map and is often effective in achieving high performance. Convolutional neural networks (CNNs) are known as models with equivariance and have been shown to approximate equivariant maps for some specific groups. However, universal approximation theorems for CNNs have been derived separately, with individual techniques for each group and setting. This paper provides a unified…

Cited by 3 publications (3 citation statements)
References 19 publications

“…The convolution operation of the first layer (21) and the convolution operations of the subsequent layers (23) are now not only equivariant to translations but to all transformations from the group G, extending translational equivariance by, e.g., compositions of translations and rotations for p4, and by compositions of translations with rotations and reflections for p4m:…”
Section: Group Convolution Feature Maps and Equivariance
confidence: 99%
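The p4 case quoted above can be checked numerically: a lifting group convolution correlates the input with all four 90° rotations of a single filter, and rotating the input then rotates each feature map spatially while cyclically shifting the rotation channel. The following is a minimal sketch under those assumptions; `lift_conv`, `f`, and `psi` are hypothetical names for illustration, not code from the citing paper.

```python
# Minimal numerical check of p4 lifting-layer equivariance (illustrative sketch).
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))    # input image
psi = rng.standard_normal((3, 3))  # odd-sized filter so 'same' mode stays centered

def lift_conv(f, psi):
    """Lifting p4 group convolution: correlate f with all four rotations of psi.

    The output is indexed by (rotation r, x, y), i.e. a feature map on the group p4.
    """
    return np.stack([correlate2d(f, np.rot90(psi, r), mode="same") for r in range(4)])

out = lift_conv(f, psi)
out_rot = lift_conv(np.rot90(f), psi)  # act on the input with a 90-degree rotation

# Equivariance: [L_s f * psi](r, x) = [f * psi](s^-1 r, s^-1 x), i.e. a spatial
# rotation of each map plus a cyclic shift of the rotation channel.
for r in range(4):
    assert np.allclose(out_rot[r], np.rot90(out[(r - 1) % 4]))
print("p4 lifting layer is equivariant to 90-degree rotations")
```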
“…In the last three decades, there have been a large number of studies of the approximation and representation properties of fully connected neural networks with a single hidden layer [17,5,4,26,2,20,37,43,38] and of deep neural networks (DNNs) with more than one hidden layer [29,42,35,44,28,1,36,13,9,32,12]. To our knowledge, however, there are very few studies of the approximation properties of CNNs [3,45,31,46,34,23]. In [3], the authors consider a one-dimensional (1D) ReLU-CNN composed of a sequence of convolutional layers followed by a fully connected layer and, by showing that the identity operator can be realized by the underlying sequence of convolutional layers, they obtain the approximation property of the CNN directly from that of the fully connected layer.…”
Section: Introduction
confidence: 99%
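The identity-realization step quoted above can be made concrete: with ReLU activations, two convolutional layers can implement the exact identity via ReLU(x) − ReLU(−x) = x, so whatever the final fully connected layer can approximate, the whole CNN inherits. This is a minimal sketch of that standard trick, my own construction rather than the code of [3]; `conv1d` and the kernel names are hypothetical.

```python
# Two ReLU convolution layers realizing the identity map (illustrative sketch).
import numpy as np

def conv1d(x, kernel, bias):
    """Zero-padded 'same' 1D cross-correlation: x is (C_in, L), kernel is (C_out, C_in, K)."""
    C_out, C_in, K = kernel.shape
    pad = K // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    L = x.shape[1]
    out = np.empty((C_out, L))
    for o in range(C_out):
        for i in range(L):
            out[o, i] = np.sum(kernel[o] * xp[:, i:i + K]) + bias[o]
    return out

relu = lambda z: np.maximum(z, 0.0)

x = np.random.randn(1, 10)                   # one-channel 1D signal

# Layer 1: split x into positive and negative parts with kernels +delta, -delta.
delta = np.zeros((1, 3)); delta[0, 1] = 1.0  # identity (Dirac) kernel
k1 = np.stack([delta, -delta])               # shape (2, 1, 3)
h = relu(conv1d(x, k1, bias=np.zeros(2)))    # h = (ReLU(x), ReLU(-x))

# Layer 2: recombine with a 1x1 convolution: ReLU(x) - ReLU(-x) = x.
k2 = np.array([[[1.0], [-1.0]]])             # shape (1, 2, 1)
y = conv1d(h, k2, bias=np.zeros(1))

assert np.allclose(y, x)                     # the two conv layers act as the identity
```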
“…A generalized study of this function class and its application to the approximation properties of CNNs can be found in [23]. In [31], the authors study the approximation properties of ResNet-type CNNs in 1D for the special function class that can be approximated by sparse DNNs.…”
Section: Introduction
confidence: 99%
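For readers unfamiliar with the term, a ResNet-type CNN composes convolutional blocks with skip connections, so each block computes x + ReLU(conv(x)) and can represent the identity exactly by zeroing its weights. A minimal 1D sketch of such a block, my own illustration and not the architecture of [31]:

```python
# One 1D residual block (illustrative sketch of a ResNet-type CNN building block).
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def conv1d_same(x, kernel):
    """Zero-padded 'same' cross-correlation of a 1D signal with an odd-length kernel."""
    K = len(kernel)
    pad = K // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(kernel, xp[i:i + K]) for i in range(len(x))])

def resnet_block(x, kernel):
    """Residual block: x + ReLU(conv(x)); the skip connection preserves the input signal."""
    return x + relu(conv1d_same(x, kernel))

x = np.random.randn(16)
y = resnet_block(x, np.array([0.1, -0.3, 0.2]))
assert y.shape == x.shape
assert np.allclose(resnet_block(x, np.zeros(3)), x)  # identity with zero weights
```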