2022
DOI: 10.36227/techrxiv.20330667.v1
Preprint

Geometric Back-Propagation in Morphological Neural Networks

Abstract: This paper provides a definition of back-propagation through geometric correspondences for morphological neural networks. In addition, dilation layers are shown to learn probe geometry by erosion of layer inputs and outputs. A proof-of-principle is provided in which the predictions and convergence of morphological networks significantly outperform those of convolutional networks.
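The abstract refers to back-propagation through dilation layers. As a rough illustration only (not the paper's geometric-correspondence construction), the max-plus form of grayscale dilation shows why a subgradient with respect to the probe (structuring element) exists: the max is piecewise linear, so the upstream gradient routes to the arg-max offset. All function names below are hypothetical.

```python
import numpy as np

def dilate(f, g):
    # Grayscale (max-plus) dilation of 1-D signal f by probe g:
    # (f ⊕ g)[x] = max_y f[x - y] + g[y], skipping out-of-range terms.
    n, m = len(f), len(g)
    out = np.full(n, -np.inf)
    arg = np.zeros(n, dtype=int)  # winning offset y per output x
    for x in range(n):
        for y in range(m):
            if 0 <= x - y < n and f[x - y] + g[y] > out[x]:
                out[x] = f[x - y] + g[y]
                arg[x] = y
    return out, arg

def grad_probe(f, g, upstream):
    # Subgradient of the dilation output w.r.t. the probe g: since the max
    # is attained at a single offset (generically), the upstream gradient
    # for each output position accumulates onto that arg-max entry of g.
    _, arg = dilate(f, g)
    dg = np.zeros_like(g)
    for x in range(len(f)):
        dg[arg[x]] += upstream[x]
    return dg
```

This is the standard sub-differentiation of a max, sketched for a one-layer case; the paper's contribution concerns defining such gradients via geometric correspondences rather than arg-max bookkeeping.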

Cited by 3 publications (2 citation statements). References 6 publications.
“…This approach has shown promise in achieving improved performance and greater flexibility in network design. Also, recent research [25] in this area makes it possible to directly calculate gradients for backpropagation, but so far it has only been carried out for one-layer networks and cannot be used for real tasks.…”
Section: Morphological Network
confidence: 99%
“…Dimitriades and Maragos investigated the use of purely morphological architectures in computer vision classification and found that these models performed slightly worse than their convolutional counterparts [29]. This has led to an increase in the use of hybrid morphological-convolution network architectures in various domains [28,[30][31][32][33][34]. The sparsity induced by replacing linear operators (such as traditional 2D convolution) with morphological ones has been previously examined in the literature [22,29,35].…”
Section: Related Work
confidence: 99%