2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2019.00021
Recurrent Dilated DenseNets for a Time-Series Segmentation Task

Cited by 7 publications (4 citation statements); references 11 publications.
“…However, we remark that these methods [Schuster et al., 2019; Takahashi and Mitsufuji, 2021] simply use convolution with different dilation rates to compute the multiscale feature descriptor, while the fine-grained features could be over-smoothed and become indistinguishable after subsampling. This prominent problem is also known as aliasing [Gong and Poellabauer, 2018], which degrades the performance of CNN-based recognition tasks [Fuchs et al., 2019]. Inspired by an interesting recursivity property of scale-space [Pauwels et al., 1995], this paper presents recursive Hermite polynomials to prevent the occurrence of aliasing in fine-grained features.…”
Section: Multi-scale Representation Learning
confidence: 99%
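To make the critique above concrete, here is a minimal PyTorch sketch of the pattern being criticized: parallel convolutions with different dilation rates concatenated into a multiscale feature descriptor. This is an illustration, not code from any of the cited papers; the class name, channel widths, and dilation rates are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: parallel convolutions with different dilation
# rates, concatenated into a multiscale feature descriptor.
class MultiScaleDilated(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4, 8)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding = rate keeps the
        # spatial size unchanged so the branches can be concatenated.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3,
                      padding=r, dilation=r)
            for r in rates
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch samples its input on a sparser grid as the rate
        # grows; fine-grained (high-frequency) content can alias once
        # the effective sampling stride exceeds the Nyquist limit,
        # which is the failure mode the quoted passage describes.
        return torch.cat([b(x) for b in self.branches], dim=1)
```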
“…This is especially obvious for fine-grained features with much higher frequency. Thus, an appropriate low-pass filter against such artifacts (e.g., a standard convolution filter [Fuchs et al., 2019]) is needed to boost the accuracy of dense prediction tasks.…”
Section: Introduction
confidence: 99%
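The remedy this quote alludes to, low-pass filtering before subsampling, can be sketched as follows. This is a minimal sketch assuming a fixed 3x3 binomial blur kernel; the cited works instead let a standard (learned) convolution play the low-pass role, and the function name here is hypothetical.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: blur-then-subsample anti-aliasing. The fixed
# binomial kernel is an assumption for illustration; the cited works
# learn the low-pass filter with a standard convolution instead.
def blur_pool(x: torch.Tensor, stride: int = 2) -> torch.Tensor:
    # 3x3 binomial (approximately Gaussian) low-pass kernel.
    k = torch.tensor([1.0, 2.0, 1.0])
    k2d = torch.outer(k, k)
    k2d = (k2d / k2d.sum()).to(dtype=x.dtype, device=x.device)
    c = x.shape[1]
    # Depthwise filtering: one copy of the kernel per channel.
    weight = k2d.repeat(c, 1, 1, 1)  # shape (c, 1, 3, 3)
    # Low-pass first, then stride: attenuates frequencies above the
    # new Nyquist limit before they can fold back as aliasing.
    return F.conv2d(x, weight, stride=stride, padding=1, groups=c)
```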
“…To avoid this problem, most CNN architectures that involve dilated convolution are carefully designed to allow earlier layers to learn an appropriate anti-aliasing filter if necessary, i.e., standard convolutions are applied before dilated convolutions with a fixed dilation factor [1,45], or the dilation factor is gradually increased as the layers go deeper [46,43]. A naive combination of DenseNet with dilation has already been proposed [10], where dilated convolutions are used and the dilation factor is set to one at the initial layer and doubled as the layers go deeper. However, this approach suffers from significant aliasing due to skip connections, as discussed in Sec.…”
Section: Dilated Convolution and Aliasing
confidence: 99%
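For reference, here is a minimal sketch of the "naive" design this quote describes: a DenseNet-style block whose i-th layer uses dilation 2^i, with dense skip connections feeding every earlier feature map forward. Channel sizes and names are illustrative assumptions, not taken from the ICMLA paper.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the "naive" DenseNet + dilation combination:
# layer i uses dilation 2**i and sees the concatenation of all
# earlier feature maps. Channel sizes are illustrative.
class DilatedDenseBlock(nn.Module):
    def __init__(self, in_ch: int, growth: int, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for i in range(num_layers):
            rate = 2 ** i  # dilation 1, 2, 4, 8, ...
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3,
                          padding=rate, dilation=rate),
            ))
            ch += growth  # dense connectivity widens the next input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            # Skip connections concatenate features produced at every
            # dilation rate, so unfiltered high-frequency content from
            # early layers reaches later, sparsely sampled filters --
            # the aliasing path the citing authors point out.
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)
```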
“…It is inefficient to have many parameters for high-resolution data, especially when local features are converted to global ones. In [21,22], a network design was proposed that combines the advantages of DenseNet with those of dilated convolution. In fact, typical dilated convolutions were used, and the dilation factors were calculated based on the layer depth, resulting in significant aliasing effects.…”
Section: Introduction
confidence: 99%