2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00054
FusAtNet: Dual Attention based SpectroSpatial Multimodal Fusion Network for Hyperspectral and LiDAR Classification

Abstract: With recent advances in sensing, multimodal data is becoming easily available for various applications, especially in remote sensing (RS), where many data types like multispectral imagery (MSI), hyperspectral imagery (HSI), LiDAR etc. are available. Effective fusion of these multisource datasets is becoming important, for these multimodality features have been shown to generate highly accurate land-cover maps. However, fusion in the context of RS is non-trivial considering the redundancy involved in the data …

Cited by 122 publications (51 citation statements)
References 40 publications
“…(3) The feature-level fusion method proposed in this paper only contains a single spatial feature. Some researchers have found that the fusion of multiple features can improve the accuracy of land cover classification [63,64]. Therefore, the fusion of multiple features to improve the usability of remote sensing images will be considered in future work.…”
Section: Discussion
confidence: 99%
“…Mohla et al. (2020) focused on multimodal feature fusion for classification. The authors focused on the “self-attention” and “cross-attention” mechanisms to explore spectral and spatial features from HSI and LiDAR.…”
Section: Modern Application Areas of Hyperspectral Imaging
confidence: 99%
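The self-/cross-attention fusion described in that statement can be illustrated with a short sketch: an HSI-driven mask (self-attention) and a LiDAR-driven mask (cross-attention) both modulate the HSI features before they are fused and classified. This is only a minimal approximation of the idea, not the authors' actual FusAtNet architecture; the layer names, channel widths, band counts, and patch size below are assumptions.

# Minimal, illustrative sketch of self-/cross-attention fusion for HSI + LiDAR
# (not the authors' exact FusAtNet; all sizes and names are hypothetical).
import torch
import torch.nn as nn

class AttentionMask(nn.Module):
    """Produces a [0, 1] attention mask from an input patch."""
    def __init__(self, in_ch, feat_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

class AttentionFusionClassifier(nn.Module):
    def __init__(self, hsi_bands, lidar_bands, feat_ch, n_classes):
        super().__init__()
        self.hsi_encoder = nn.Sequential(
            nn.Conv2d(hsi_bands, feat_ch, 3, padding=1), nn.ReLU())
        self.self_att = AttentionMask(hsi_bands, feat_ch)    # mask from HSI itself
        self.cross_att = AttentionMask(lidar_bands, feat_ch) # mask from LiDAR
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(2 * feat_ch, n_classes))
    def forward(self, hsi, lidar):
        feats = self.hsi_encoder(hsi)
        spectral = feats * self.self_att(hsi)    # HSI-driven (self) attention
        spatial = feats * self.cross_att(lidar)  # LiDAR-driven (cross) attention
        return self.head(torch.cat([spectral, spatial], dim=1))

# Example with random 11x11 patches; 144 HSI bands / 1 LiDAR channel are assumed, Houston-like sizes.
model = AttentionFusionClassifier(hsi_bands=144, lidar_bands=1, feat_ch=32, n_classes=15)
logits = model(torch.randn(4, 144, 11, 11), torch.randn(4, 1, 11, 11))
print(logits.shape)  # torch.Size([4, 15])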
“…• SVM: It adopts the fused hyperspectral and LiDAR data as input and an SVM as the classifier. Since the work in [32] ran the same experiment, it is reasonable to cite the results from [32].
• CHOTF: It is a Coupled Higher-Order Tensor Factorization (CHOTF) model for hyperspectral and LiDAR fusion proposed in [33].…”
Section: A. Data Description and Experimental Setup
confidence: 99%
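The fused-input SVM baseline mentioned in that statement amounts to feature-level (early) fusion: per-pixel HSI and LiDAR features are concatenated and fed to a kernel SVM. The sketch below uses synthetic data to keep it self-contained; the array shapes and SVM hyperparameters are assumptions, not values from the cited experiments.

# Minimal sketch of a fused-input SVM baseline (synthetic data; shapes and
# hyperparameters are assumptions).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, hsi_bands, lidar_feats, n_classes = 1000, 144, 1, 15
hsi = rng.normal(size=(n_pixels, hsi_bands))      # spectral features per pixel
lidar = rng.normal(size=(n_pixels, lidar_feats))  # elevation feature(s) per pixel
y = rng.integers(0, n_classes, size=n_pixels)

X = np.concatenate([hsi, lidar], axis=1)          # feature-level (early) fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("overall accuracy:", clf.score(X_te, y_te))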