2022
DOI: 10.1109/jstars.2022.3188565

Residual Dense Autoencoder Network for Nonlinear Hyperspectral Unmixing

Abstract: Hyperspectral unmixing is a popular research topic in hyperspectral processing, aiming to obtain the ground features contained in mixed pixels and their proportions. Recently, nonlinear mixing models have received particular attention in hyperspectral decomposition, since the linear mixing model cannot adequately describe situations in which multiple scattering occurs. In this study, we constructed a residual dense autoencoder network (RDAE) for nonlinear hyperspectral unmixing in multiple scattering scenes…
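As context for the autoencoder-based unmixing idea in the abstract, the sketch below shows the generic form such networks take: an encoder maps each pixel spectrum to abundance fractions, and a decoder whose weights act as endmember spectra remixes them into a reconstructed spectrum. This is a minimal illustration under assumptions, not the paper's RDAE; the class name `UnmixingAutoencoder`, the layer sizes, and the softmax abundance constraint are illustrative choices.

```python
import torch
import torch.nn as nn

class UnmixingAutoencoder(nn.Module):
    """Minimal autoencoder-style unmixing sketch (not the exact RDAE of the paper).

    The encoder maps a pixel spectrum (B bands) to R abundance fractions;
    the decoder is a single linear layer whose weight matrix plays the role
    of the endmember spectra, so decoding is a (here, linear) remixing step.
    """
    def __init__(self, num_bands: int, num_endmembers: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_bands, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_endmembers),
            nn.Softmax(dim=-1),  # enforces non-negativity and sum-to-one abundances
        )
        # Decoder weights ~ endmember matrix (B x R); would be kept non-negative in training
        self.decoder = nn.Linear(num_endmembers, num_bands, bias=False)

    def forward(self, x):
        abundances = self.encoder(x)                # (N, R)
        reconstruction = self.decoder(abundances)   # (N, B)
        return abundances, reconstruction

# Toy usage: 200 pixels, 156 bands, 4 endmembers
model = UnmixingAutoencoder(num_bands=156, num_endmembers=4)
pixels = torch.rand(200, 156)
abund, recon = model(pixels)
loss = nn.functional.mse_loss(recon, pixels)  # reconstruction term of an unmixing loss
loss.backward()
```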

Cited by 12 publications (2 citation statements)
References 69 publications (90 reference statements)
“…It was trained by a loss function including a quadratic term based on the Hapke model, the reconstruction error of the reflectances, and a minimum-volume total variation (TV) term. A residual dense autoencoder network is constructed in [38] for nonlinear decomposition in multiple scattering scenes, and a new type of deep autoencoder network based on the generalized bilinear model is designed in [39].…”
Section: Model-guided DL Methods
Mentioning confidence: 99%
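For readers unfamiliar with the bilinear models referenced in this statement, the following is a hedged sketch of the generalized bilinear mixing (GBM) forward model that a GBM-based decoder such as the one in [39] builds on. It shows the standard mixing formula only, not the cited network; the function name `gbm_remix` and the tensor shapes are illustrative assumptions.

```python
import torch

def gbm_remix(endmembers, abundances, gamma):
    """Generalized bilinear model (GBM) forward mixing (sketch, not the network of [39]).

    endmembers: (R, B) endmember spectra
    abundances: (N, R) abundance fractions
    gamma:      (N, R, R) nonlinear interaction coefficients in [0, 1]
    """
    linear = abundances @ endmembers                           # (N, B) linear mixture
    # Pairwise endmember products e_i * e_j for i < j model second-order scattering
    R, B = endmembers.shape
    idx_i, idx_j = torch.triu_indices(R, R, offset=1)
    pair_spectra = endmembers[idx_i] * endmembers[idx_j]       # (P, B)
    pair_abund = abundances[:, idx_i] * abundances[:, idx_j]   # (N, P)
    pair_gamma = gamma[:, idx_i, idx_j]                        # (N, P)
    bilinear = (pair_gamma * pair_abund) @ pair_spectra        # (N, B)
    return linear + bilinear

# Toy usage: 5 pixels, 4 endmembers, 100 bands
E = torch.rand(4, 100)
a = torch.softmax(torch.rand(5, 4), dim=-1)
g = torch.rand(5, 4, 4)
y = gbm_remix(E, a, g)  # (5, 100) mixed spectra
```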
“…Two main parts make up the DualMAF module: channel attention and atrous convolution with various dilation rates. Atrous convolution captures rich feature information because it enlarges the convolution kernel's receptive field without increasing the number of parameters [28]. Moreover, channel attention is added to reinforce the interaction between the sequences and thereby enhance the feature recognition ability.…”
Section: Dual-path Multi-scale Attention Fusion Module
Mentioning confidence: 99%
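The block below sketches, under stated assumptions, how the two ingredients named in this statement can be combined: parallel atrous (dilated) convolutions at different dilation rates followed by squeeze-and-excitation-style channel attention. It illustrates the general pattern, not the cited DualMAF module's exact design; the class name `DualMAFSketch`, the channel counts, and the dilation rates (1, 2, 4) are assumptions.

```python
import torch
import torch.nn as nn

class DualMAFSketch(nn.Module):
    """Hedged sketch of a multi-scale attention fusion block:
    parallel dilated 3x3 convolutions (same parameter count, larger
    receptive field) fused by a 1x1 convolution, then reweighted by
    squeeze-and-excitation-style channel attention."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                   # squeeze: global pooling
            nn.Conv2d(channels, channels // 4, kernel_size=1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, kernel_size=1), nn.Sigmoid(),  # excite: per-channel weights
        )

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        fused = self.fuse(multi_scale)
        return fused * self.attn(fused)  # channel reweighting, spatial size unchanged

# Toy usage: batch of 2 feature maps, 32 channels, 16x16 spatial
block = DualMAFSketch(channels=32)
out = block(torch.rand(2, 32, 16, 16))
print(out.shape)  # torch.Size([2, 32, 16, 16])
```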