2010 IEEE International Conference on Image Processing
DOI: 10.1109/icip.2010.5652588

Learning of structuring elements for morphological image model with a sparsity prior

Cited by 8 publications (4 citation statements). References 7 publications.
“…To the best of our knowledge, we are the first to do this in a flexible and gradient-based framework without any prior. For instance, in classical approaches [13] or more recent ones [11], the operator needs to be fixed a priori. Figure 4-top shows an example of a closing with a line of length 10 and an orientation of 45°, whereas Figure 4-bottom shows an example of an opening with a square of size 5.…”
Section: Learning Opening and Closing
confidence: 99%
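The fixed structuring elements this excerpt refers to are straightforward to reproduce. As a minimal sketch (not the cited papers' own code), assuming scipy.ndimage, a hand-built 45° line of length 10, and a 5×5 square:

```python
import numpy as np
from scipy import ndimage

# A line of length 10 oriented at 45 degrees (up to the image-axis
# convention): the main diagonal of a 10x10 boolean grid.
line_45 = np.eye(10, dtype=bool)

# A flat 5x5 square structuring element.
square_5 = np.ones((5, 5), dtype=bool)

# Stand-in grayscale image (the quote's Figure 4 inputs are not available).
rng = np.random.default_rng(0)
image = rng.random((64, 64))

closed = ndimage.grey_closing(image, footprint=line_45)   # cf. Figure 4-top
opened = ndimage.grey_opening(image, footprint=square_5)  # cf. Figure 4-bottom
```

In a learning setting, such footprints would be trainable parameters rather than fixed arrays, which is exactly the contrast the quote draws with approaches where the operator is fixed a priori.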
“…However, most of the proposed approaches do not cover all operators. More importantly, they cannot learn both the structuring element and the operator, e.g., [11]. This is obviously quite an important limitation, as it makes the composition of complex filtering pipelines very hard or even impossible.…”
Section: Introduction
confidence: 99%
“…Morphological operations are not fully differentiable, however; therefore MNNs cannot readily be trained using linear back-propagation. As a consequence, three different approaches to back-propagating errors over sequences of morphological filters are established in the literature: (1) approximate (linearly) the min/max operations to make them differentiable [10], [19], [20];…”
Section: Introduction
confidence: 99%
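Approach (1), smoothing the min/max operations, can be illustrated with a log-sum-exp soft maximum. The sketch below is a generic illustration of that idea, not the specific schemes of [10], [19], [20]:

```python
import numpy as np

def soft_dilation_1d(f, w, beta=10.0):
    """Smooth approximation of 1-D grayscale dilation.

    Exact dilation computes max_j (f[i+j] + w[k-1-j]) over each window;
    here the hard max is replaced by (1/beta) * logsumexp(beta * .),
    which is differentiable everywhere and tends to the true max as
    beta grows.
    """
    n, k = len(f), len(w)
    out = np.empty(n - k + 1)
    for i in range(n - k + 1):
        v = f[i:i + k] + w[::-1]           # values the hard max ranges over
        m = v.max()                        # shift for numerical stability
        out[i] = m + np.log(np.exp(beta * (v - m)).sum()) / beta
    return out

f = np.array([0.0, 1.0, 3.0, 2.0, 0.0, 4.0, 1.0])
w = np.zeros(3)                            # flat structuring element, size 3
print(soft_dilation_1d(f, w, beta=50.0))   # close to a sliding max of f
```

Because the soft maximum is smooth, gradients with respect to both the input f and the structuring element w are well defined, which is what makes end-to-end training of a morphological layer possible under this approximation.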
“…From the chain rule, the term $\partial E/\partial f_+(x)$ is all the information that is needed to obtain the derivatives of the error with respect to the input $f_-$ and the parameterized probe $p$ (or kernel) of $\otimes$. As a consequence, three different approaches to back-propagating errors over sequences of morphological filters are established in the literature: (1) approximate (linearly) the min/max operations to make them differentiable [10,19,20];…”
confidence: 99%
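In the excerpt's notation, where a layer produces $f_+ = f_- \otimes p$ from input $f_-$, the generic chain-rule identity behind this remark reads (a sketch of the standard argument, not the citing paper's exact derivation):

```latex
\frac{\partial E}{\partial f_-(y)}
  = \sum_x \frac{\partial E}{\partial f_+(x)}\,
           \frac{\partial f_+(x)}{\partial f_-(y)},
\qquad
\frac{\partial E}{\partial p(j)}
  = \sum_x \frac{\partial E}{\partial f_+(x)}\,
           \frac{\partial f_+(x)}{\partial p(j)}.
```

Once $\partial E/\partial f_+(x)$ is available, both gradients follow from the local derivatives of $\otimes$ alone, which is why it is "all the information that is needed".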