2021 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip42928.2021.9506260

Self-Organized Residual Blocks For Image Super-Resolution

Abstract: It has become standard practice to use convolutional networks (ConvNets) with ReLU non-linearity in image restoration and super-resolution (SR). Although the universal approximation theorem states that a multi-layer neural network can approximate any non-linear function with the desired precision, it does not reveal the best network architecture to do so. Recently, operational neural networks (ONNs), which choose the best non-linearity from a set of alternatives, and their "self-organized" variants (Self-ONNs) …

Cited by 9 publications (9 citation statements) | References 14 publications
“…A self-organized residual (SOR) block can be obtained by replacing all regular convolutional layers in a residual block with self-organized layers (SOLs), i.e., layers formed by generative neurons without the activation function σ(·) (Keleş et al., 2021a).…”
Section: Self-Organized Residual Blocks
confidence: 99%
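The SOR block described in this statement lends itself to a short sketch. Below is a minimal PyTorch illustration, assuming the generative-neuron formulation in which each layer computes a Q-th order Maclaurin-style polynomial of its input, y = Σ_{q=1}^{Q} W_q ∗ x^q + b. The class names (SelfOrganizedLayer, SORBlock), the order Q = 3, and the tanh between the two layers are our assumptions, not the authors' reference implementation.

```python
# Minimal sketch of a self-organized layer (SOL) and an SOR block, under the
# assumptions stated above. Not the authors' reference implementation.
import torch
import torch.nn as nn


class SelfOrganizedLayer(nn.Module):
    """Generative-neuron layer: one convolution per power of the input,
    summed together. No activation sigma() is applied inside the layer,
    matching the SOL description in the citation statement."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, q: int = 3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size,
                      padding=kernel_size // 2,
                      bias=(i == 0))  # one bias term suffices for the sum
            for i in range(q)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # sum_{q=1}^{Q} conv_q(x^q): the learned kernels act as the
        # Taylor-series coefficients of the approximated non-linearity.
        return sum(conv(x ** (i + 1)) for i, conv in enumerate(self.convs))


class SORBlock(nn.Module):
    """Residual block with every regular convolution replaced by a SOL.
    The tanh between the SOLs is our assumption: it keeps intermediate
    values in [-1, 1], where a truncated Taylor expansion is well behaved."""

    def __init__(self, channels: int, q: int = 3):
        super().__init__()
        self.sol1 = SelfOrganizedLayer(channels, channels, q=q)
        self.sol2 = SelfOrganizedLayer(channels, channels, q=q)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.sol2(torch.tanh(self.sol1(x)))
```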
“…Figure 2.8: Illustration of the SOR block (Keleş et al., 2021a), where SOL stands for a layer formed by generative neurons without an activation function.…”
confidence: 99%
“…In this study, to address the aforementioned limitations, we propose the Operational Segmentation Network (OSegNet), which performs COVID-19 pneumonia segmentation for diagnosis using CXR images. In contrast to the convolutional layers used in many deep networks, the decoder block uses operational layers with the generative neurons of Self-Organized Operational Neural Networks (Self-ONNs) [18][19][20][21][22]. Self-ONNs are heterogeneous network models with generative neurons that can create any non-linear transformation in each kernel element.…”
Section: Introduction
confidence: 99%
“…In this neuron model, the non-linear transformation function is approximated via a Taylor-series expansion, and the model's weights (trainable parameters) are the coefficients of the approximated function. It has been shown in several classification and denoising applications that Self-Organized Operational Neural Networks (Self-ONNs) with 2D operational layers [13][14][15][16] encapsulating generative neurons achieve improved performance compared to Convolutional Neural Networks (CNNs) (Fig. 1).…”
Section: Introduction
confidence: 99%
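The Taylor-series view in the statement above can be made concrete with a small numeric sketch: the trainable weights play the role of polynomial coefficients, so a single kernel element can realize an arbitrary truncated non-linearity. The function name below and the use of tanh's Maclaurin coefficients are illustrative assumptions, not part of the cited work.

```python
# Numeric sketch of one kernel element of a generative neuron: its trainable
# parameters are the coefficients of a truncated power series.
import numpy as np


def generative_element(x, coeffs):
    """y = sum_q coeffs[q-1] * x**q -- one kernel element, with the
    coefficients acting as its trainable parameters."""
    return sum(c * x ** (q + 1) for q, c in enumerate(coeffs))


x = np.linspace(-0.5, 0.5, 5)
# Maclaurin coefficients of tanh: x - x^3/3 + 2x^5/15 - ...
tanh_coeffs = [1.0, 0.0, -1.0 / 3.0, 0.0, 2.0 / 15.0]
print(generative_element(x, tanh_coeffs))  # close to tanh on [-0.5, 0.5]
print(np.tanh(x))
```

With learned rather than fixed coefficients, each kernel element can settle on whatever non-linear transformation the task favors, which is the property the cited statements attribute to generative neurons.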