2022
DOI: 10.3390/rs14194941

Dynamic Convolution Self-Attention Network for Land-Cover Classification in VHR Remote-Sensing Images

Abstract: The current deep convolutional neural networks for very-high-resolution (VHR) remote-sensing image land-cover classification often suffer from two challenges. First, the feature maps extracted by network encoders based on vanilla convolution usually contain a lot of redundant information, which easily causes misclassification of land cover. Moreover, these encoders usually require a large number of parameters and high computational costs. Second, as remote-sensing images are complex and contain many objects wi…
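The "dynamic convolution" named in the title generally refers to replacing a single fixed kernel with an input-conditioned mixture of several candidate kernels, which reduces redundancy relative to stacking many static filters. The paper's exact formulation is not shown here, so the following is only a minimal NumPy sketch of the general idea: global-average-pool the input, compute softmax attention over K candidate kernels, and aggregate them into one kernel. The names `dynamic_conv_weights` and `proj` are illustrative, not from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_conv_weights(x, kernels, proj):
    """Aggregate K candidate kernels with input-dependent attention.

    x:       input feature map, shape (C, H, W)
    kernels: K candidate kernels, shape (K, kh, kw)
    proj:    learned projection from pooled context to kernel logits, (K, C)
    """
    ctx = x.mean(axis=(1, 2))                    # global average pooling -> (C,)
    attn = softmax(proj @ ctx)                   # attention over kernels -> (K,)
    return np.tensordot(attn, kernels, axes=1)   # weighted kernel mix -> (kh, kw)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))       # toy 3-channel feature map
kernels = rng.standard_normal((4, 3, 3)) # 4 candidate 3x3 kernels
proj = rng.standard_normal((4, 3))
k = dynamic_conv_weights(x, kernels, proj)
print(k.shape)  # (3, 3)
```

Because the attention weights sum to 1, the aggregated kernel costs the same to apply as a single static kernel at inference time, which is the usual efficiency argument for this family of methods.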

Cited by 4 publications (3 citation statements)
References 54 publications
“…To illustrate, input and feature fusions (see Fig. 3 at bottom-left) [72], [74]-[80], or feature and decision fusions (see Fig. 3 at bottom-right) [33], [58], [81] are used together, improving the prediction performance compared to using a single-layer fusion.…”
Section: A. Where To Fuse? (citation type: mentioning; confidence: 99%)
“…Nevertheless, in RS image-based applications, the use-cases have been extended as a way to enhance the information. Then, attention has been applied across spatial [100], [177], spectral [123], [134], [181], spatio-spectral [78], [80], [136], [139], or vector [138] dimensions. The motivation is that attention mechanisms lead to adaptively enhance the most relevant features of the data.…”
Section: Modeling Considerations (citation type: mentioning; confidence: 99%)
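The spatial attention mentioned in the excerpt above typically pools a feature map across channels, turns the pooled descriptors into a sigmoid mask over the spatial grid, and reweights every channel by that mask. The sketch below follows that general recipe (the sum of average- and max-pooled maps stands in for the learned convolution a real model would use; it is an assumption for illustration, not the cited papers' exact design).

```python
import numpy as np

def spatial_attention(x):
    """Reweight a (C, H, W) feature map by a per-pixel attention mask."""
    avg = x.mean(axis=0)                 # channel-average descriptor -> (H, W)
    mx = x.max(axis=0)                   # channel-max descriptor -> (H, W)
    score = avg + mx                     # stand-in for a learned conv on [avg; mx]
    mask = 1.0 / (1.0 + np.exp(-score))  # sigmoid gate in (0, 1)
    return x * mask                      # broadcast mask over all channels

rng = np.random.default_rng(1)
x = rng.standard_normal((3, 4, 4))
y = spatial_attention(x)
print(y.shape)  # (3, 4, 4)
```

Because the mask lies strictly in (0, 1), the operation can only attenuate features, emphasizing spatial positions the pooled descriptors score highly; spectral (channel) attention is the transpose of this idea, pooling over space and gating over channels.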