2022
DOI: 10.1109/jstars.2022.3144587

CFNet: A Cross Fusion Network for Joint Land Cover Classification Using Optical and SAR Images

Abstract: As two of the most widely-used remote sensing images, optical and synthetic aperture radar (SAR) images show abundant and complementary information on the same target owing to their individual imaging mechanisms. Consequently, using optical and SAR images simultaneously can better describe the inherent features of the target and thus be beneficial for subsequent remote sensing applications. In this paper, we propose a novel modular fully convolutional network model to improve the accuracy of land cover classif…

Cited by 32 publications (23 citation statements) · References 37 publications
“…When the two inputs share the same modality, weight sharing ensures that the network produces similar feature representations for them. However, optical and SAR inputs show different characteristics that need to be processed by different networks [29]. Furthermore, in our case, the pre-change input consists of two modalities (optical and SAR), while the post-change input contains only SAR data.…”
Section: Introduction (mentioning)
confidence: 99%
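The distinction this citation statement draws — a weight-shared (Siamese) encoder versus modality-specific encoders for optical and SAR — can be illustrated with a minimal, hypothetical NumPy sketch. The one-layer `encoder` and the variable names are illustrative only, not taken from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Toy one-layer 'encoder': linear projection followed by ReLU."""
    return np.maximum(x @ W, 0.0)

optical = rng.normal(size=(4, 16))  # 4 samples, 16 optical features
sar = rng.normal(size=(4, 16))      # 4 samples, 16 SAR features

# Siamese / weight-shared design: both modalities pass through the same W,
# so identical inputs are guaranteed identical feature representations.
W_shared = rng.normal(size=(16, 8))
f_opt_shared = encoder(optical, W_shared)
f_sar_shared = encoder(sar, W_shared)

# Dual-stream design: each modality gets its own parameters, letting the
# network adapt to the different imaging characteristics of optical and SAR.
W_opt = rng.normal(size=(16, 8))
W_sar = rng.normal(size=(16, 8))
f_opt = encoder(optical, W_opt)
f_sar = encoder(sar, W_sar)
```

With shared weights the mapping is the same function for both branches; with separate weights, each branch can specialize to its modality, which is the motivation the statement gives for not sharing weights across optical and SAR inputs.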
“…To further demonstrate the effectiveness of our fusion method, we compare it with other fusion strategies widely used in RGB-Depth and RGB-Thermal perception tasks, including channel-wise weighted feature fusion (CWF) [14], cross gates (CRGs) [15], the cross reference module (CRM) [16], and gated information fusion (GIF) [17], as well as a fusion method for PAN and MS data, i.e., the adaptive feature fusion module (AFFM) [18]. For CWF, CRGs, and GIF, we reimplement them in strict accordance with the papers; for SCA and AFFM, we use the code the authors provided.…”
Section: F. Overall Results (mentioning)
confidence: 99%
“…CRGs [15] generate channel weights for the PAN and MS modalities, respectively, and then apply them crosswise, as shown in Fig. 7(b).…”
Section: F. Overall Results (mentioning)
confidence: 99%
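The cross-gating idea described above — each modality's channel weights modulating the other modality's features — can be sketched in NumPy. This is a hypothetical illustration under stated assumptions (a squeeze-and-excitation-style gate of global average pooling plus a sigmoid, and additive fusion); the actual CRG implementation in [15] may differ, and the names `channel_gate` and `cross_gate_fuse` are illustrative:

```python
import numpy as np

def channel_gate(feat):
    """SE-style gate: global average pool over the spatial dims,
    then a sigmoid, giving one weight per channel."""
    pooled = feat.mean(axis=(1, 2))           # (C,)
    return 1.0 / (1.0 + np.exp(-pooled))      # sigmoid -> (C,)

def cross_gate_fuse(pan, ms):
    """Cross gates: each modality's channel weights modulate the
    OTHER modality's features; the gated features are then summed."""
    g_pan = channel_gate(pan)                 # weights derived from PAN
    g_ms = channel_gate(ms)                   # weights derived from MS
    pan_gated = pan * g_ms[:, None, None]     # MS weights applied to PAN
    ms_gated = ms * g_pan[:, None, None]      # PAN weights applied to MS
    return pan_gated + ms_gated

rng = np.random.default_rng(1)
pan = rng.normal(size=(8, 32, 32))  # (channels, H, W)
ms = rng.normal(size=(8, 32, 32))
fused = cross_gate_fuse(pan, ms)
```

Applying each gate to the opposite branch, rather than to its own, is what makes the scheme "cross": each modality decides which of the other modality's channels to emphasize.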
“…[2], [3] and semantic segmentation [4], [5]. While clouds are characterised in great detail [6], [7] and different approaches for handling them have been investigated, less effort has been spent on investigating exactly what their effects on remote sensing applications are.…”
Section: Introduction (mentioning)
confidence: 99%