2020
DOI: 10.48550/arxiv.2006.10853
Preprint

Image classification in frequency domain with 2SReLU: a second harmonics superposition activation function

Abstract: Deep Convolutional Neural Networks are able to identify complex patterns and perform tasks with super-human capabilities. However, despite these exceptional results, they are not completely understood, and it is still impractical to hand-engineer similar solutions. In this work, an image classification Convolutional Neural Network and its building blocks are described from a frequency domain perspective. Some network layers have established counterparts in the frequency domain, like the convolutional and pooling l…

Cited by 2 publications (2 citation statements) · References 17 publications
“…In [18], another kind of spectral ReLU operation, called 2SReLU, was proposed; it adds low-frequency components to their second harmonics, and the method has two hyperparameters to adjust each frequency's contribution to the final result. The equation of 2SReLU is as follows:…”
Section: Acceleration of Network in the Fourier Domain
Confidence: 99%
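The exact 2SReLU equation is elided in the quote above, so the following is only a minimal sketch of the idea as the citing paper describes it: each low-frequency component is superposed with its second harmonic (the component at twice the frequency index), with two hyperparameters, here called `alpha` and `beta`, weighting the two contributions. The wrap-around indexing and the absence of normalization are assumptions, not the paper's definition.

```python
import numpy as np

def two_s_relu(spectrum, alpha=0.7, beta=0.3):
    """Sketch of a 2SReLU-style superposition on a 2-D spectrum.

    For each frequency bin (u, v), adds beta times the second-harmonic
    bin (2u, 2v) to alpha times the original bin. Indices wrap modulo
    the spectrum size (an assumption); alpha and beta stand in for the
    paper's two contribution-weighting hyperparameters.
    """
    h, w = spectrum.shape
    iu = (2 * np.arange(h)) % h   # second-harmonic row indices
    iv = (2 * np.arange(w)) % w   # second-harmonic column indices
    second = spectrum[np.ix_(iu, iv)]
    return alpha * spectrum + beta * second

# Example: apply to the FFT of a small image patch.
patch = np.random.default_rng(0).standard_normal((8, 8))
out = two_s_relu(np.fft.fft2(patch))
```

Because the operation is a fixed linear combination of spectrum entries, it avoids the inverse transform that applying a pointwise ReLU in the spatial domain would require, which is the motivation discussed in the surrounding citation context.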
“…There are existing methodologies in the literature which exploit the advantageous properties of the Fourier transform or other spectral methods, but all of these substitute only specific computational building blocks in the Fourier domain and return from it with an inverse transformation, which adds extra computation to the system. Some of these go back to the time domain directly after the convolution part to apply the nonlinear activation and the downsampling step (e.g., [14,15]), but there exist solutions which provide an approximation to implement pooling and nonlinear activation functions in the frequency domain as well (e.g., [16–18]); thus, even in these architectures, one inverse Fourier transformation is applied at the last layer of the network.…”
Section: Introduction
Confidence: 99%