2022
DOI: 10.3390/rs14164066

FusionNet: A Convolution–Transformer Fusion Network for Hyperspectral Image Classification

Abstract: In recent years, deep-learning-based hyperspectral image (HSI) classification networks have become one of the most dominant implementations in HSI classification tasks. Among these networks, convolutional neural networks (CNNs) and attention-based networks have prevailed over other HSI classification networks. While convolutional neural networks with receptive fields can effectively extract local features in the spatial dimension of HSI, they are poor at capturing the global and sequential features of spectra…

Cited by 39 publications (21 citation statements)
References: 49 publications
“…Niu et al. proposed a transformer-based semantic segmentation model21 for the crop mapping task and proved it to be effective. Yang et al. proposed a convolution-transformer fusion network for HSI classification,22 which fuses convolution and transformer in both serial and parallel mechanisms to fully utilize HSI features. Hu et al. proposed a transformer-based fusion network for HSI super-resolution, which achieves efficient results.…”
Section: Transformer
Mentioning confidence: 99%
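The serial and parallel fusion mechanisms described in this statement can be illustrated with a minimal sketch. The PyTorch module below is a hypothetical illustration, not the FusionNet implementation: `SerialFusion` feeds convolutional features into a transformer encoder, while `ParallelFusion` runs a convolution branch and a transformer branch side by side and sums their outputs. All names, dimensions, and layer counts are assumptions for demonstration.

```python
# Minimal sketch of serial vs. parallel convolution-transformer fusion for
# HSI patches. Hypothetical illustration only, not the FusionNet code.
import torch
import torch.nn as nn

class SerialFusion(nn.Module):
    """Convolution first, then a transformer over the resulting tokens."""
    def __init__(self, bands, dim=64, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(bands, dim, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x):                          # x: (B, bands, H, W)
        f = self.conv(x)                           # local spatial features
        tokens = f.flatten(2).transpose(1, 2)      # (B, H*W, dim)
        return self.transformer(tokens)            # global/sequential features

class ParallelFusion(nn.Module):
    """Convolution and transformer branches run in parallel and are summed."""
    def __init__(self, bands, dim=64, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(bands, dim, kernel_size=3, padding=1)
        self.embed = nn.Linear(bands, dim)         # per-pixel spectral tokens
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x):                          # x: (B, bands, H, W)
        local = self.conv(x).flatten(2).transpose(1, 2)      # (B, H*W, dim)
        tokens = self.embed(x.flatten(2).transpose(1, 2))    # (B, H*W, dim)
        return local + self.transformer(tokens)    # fused local + global

patch = torch.randn(2, 103, 9, 9)                  # e.g. a 103-band HSI patch
print(SerialFusion(103)(patch).shape)              # torch.Size([2, 81, 64])
print(ParallelFusion(103)(patch).shape)            # torch.Size([2, 81, 64])
```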
“…Yang et al. proposed a convolution-transformer fusion network for HSI classification,22 which fuses convolution and transformer in both serial and parallel mechanisms to fully utilize HSI features. Hu et al.…”
Section: Related Work
Mentioning confidence: 99%
“…The works [50], [51], [52], [53] classify HSIs with a spectral-spatial approach, applying the concept of the Vision Transformer (ViT) in steps that extract spatial information alongside the spectral signature. Some authors, such as [54], [55], [56], unite features of CNN and LSTM architectures with the Transformer for HSI classification. Works like [57], [58] also present an extensive comparison of the transformer architecture with other types of architectures, comparing approaches that use spectral and spatial information.…”
Section: Introduction
Mentioning confidence: 99%
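The spectral-spatial ViT idea mentioned in this statement can be sketched briefly: treat each pixel of an HSI patch as a token whose features are its spectral signature, prepend a class token, and classify with a transformer encoder. The module below is a hypothetical illustration of that general concept, not any of the cited models; the class name, dimensions, and depth are assumptions.

```python
# Minimal sketch of spectral-spatial ViT-style HSI classification.
# Hypothetical illustration of the general concept, not a cited model.
import torch
import torch.nn as nn

class SpectralSpatialViT(nn.Module):
    def __init__(self, bands, num_classes, patch=9, dim=64, heads=4, depth=2):
        super().__init__()
        self.embed = nn.Linear(bands, dim)           # spectral signature -> token
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, patch * patch + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                            # x: (B, bands, H, W)
        tokens = self.embed(x.flatten(2).transpose(1, 2))   # (B, H*W, dim)
        cls = self.cls.expand(x.size(0), -1, -1)
        z = torch.cat([cls, tokens], dim=1) + self.pos      # add positions
        z = self.encoder(z)
        return self.head(z[:, 0])                    # classify via class token

model = SpectralSpatialViT(bands=103, num_classes=9)
logits = model(torch.randn(2, 103, 9, 9))            # shape: (2, 9)
```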
“…Wang [22] proposed a lightweight MILRDA model, which extracts more discriminative features by constructing attention blocks while retaining shallow features and suppresses useless background via channel attention, achieving good results on the UCM dataset and other datasets. Yang [23] proposed a fusion network, FusionNet, for HSI classification, which integrates convolution and transformer operations in serial and parallel mechanisms; the experimental results on small-scale datasets are promising. Bai [24] proposed an adaptive dual attention network that processes features separately from the spectral and spatial angles, strengthening the independent feature representation of the spectrum and improving the ability to search for features; it then aggregates high-level features by evaluating the confidence of effective information in the receptive field, and a dispersion loss supervises the learnable parameters to improve object recognition performance, making good progress on publicly available datasets.…”
Section: Introduction
Mentioning confidence: 99%
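The channel attention used above to suppress uninformative background can be sketched with a squeeze-and-excitation-style block: global average pooling produces one descriptor per channel, a small bottleneck network turns those descriptors into per-channel weights, and the feature map is rescaled by them. This is a hypothetical illustration of the mechanism, not the MILRDA or dual-attention code.

```python
# Minimal squeeze-and-excitation-style channel attention, sketching the
# channel-attention idea described above. Hypothetical illustration only.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                       # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # excite: per-channel weights
        return x * w                                 # reweight feature channels

feat = torch.randn(2, 64, 9, 9)
out = ChannelAttention(64)(feat)                     # same shape, channels rescaled
```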