2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)
DOI: 10.1109/igarss.2017.8126930
Nonnegative sparse autoencoder for robust endmember extraction from remotely sensed hyperspectral images

Cited by 26 publications (11 citation statements). References 11 publications.
“…This leads to vanishing gradients during training. The methods in [19,82] use a parameterized sigmoid activation function. The ReLU activation largely mitigates the problem of vanishing gradients, but the function is still saturated to the left, and hidden units can become stuck at zero.…”
Section: B. Choice of Activation Function for Hidden Layers
confidence: 99%
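A minimal numerical sketch of the saturation behavior the excerpt describes, assuming NumPy; the leaky-ReLU slope of 0.01 is an illustrative choice, not a value from the cited papers:

```python
import numpy as np

def relu(x):
    # Standard ReLU: output (and gradient) is zero for all negative
    # inputs, so a unit driven negative can get stuck at zero ("die").
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):
    # Leaky ReLU keeps a small slope on the left, so the gradient
    # never vanishes entirely for negative pre-activations.
    return np.where(x >= 0, x, slope * x)

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(relu(x))        # all negative inputs collapse to 0
print(leaky_relu(x))  # negative inputs retain a small signal
```

The comparison shows why ReLU mitigates vanishing gradients for positive inputs while still saturating to the left.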
“…All the methods in [19,71,78,82,84,88,95,96] use the MSE measure as the fidelity term of the network's loss function. It is also possible to use a combination of both scale-invariant and non-scale-invariant loss terms.…”
Section: F. Choices of Loss Fidelity Function and Spectral Variability
confidence: 99%
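The combination the excerpt mentions can be sketched as follows, assuming NumPy; the function names, the spectral angle distance (SAD) as the scale-invariant term, and the weight `alpha` are illustrative assumptions, not the exact formulation of any cited method:

```python
import numpy as np

def mse(y, y_hat):
    # Non-scale-invariant fidelity term: mean squared error.
    return np.mean((y - y_hat) ** 2)

def sad(y, y_hat, eps=1e-12):
    # Scale-invariant term: spectral angle between the input
    # spectrum and its reconstruction (0 for any scaled copy).
    cos = np.dot(y, y_hat) / (np.linalg.norm(y) * np.linalg.norm(y_hat) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def combined_loss(y, y_hat, alpha=0.5):
    # Weighted mix of the two terms; alpha is an illustrative weight.
    return alpha * mse(y, y_hat) + (1 - alpha) * sad(y, y_hat)

y = np.array([1.0, 2.0, 3.0])
print(combined_loss(y, 2 * y))  # SAD vanishes for a scaled copy; MSE does not
```

A scaled reconstruction makes the point concrete: the spectral angle term is insensitive to brightness changes, while MSE penalizes them.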
“…Recent applications of neural network methods for unmixing are [23], [24], and [25], where an autoencoder is used for abundance estimation, i.e., mapping input spectra to abundance fractions without extracting the actual endmembers. In [26], a shallow symmetric nonnegative sparse autoencoder (i.e., one whose encoder and decoder have tied weights) is used to extract endmembers. The novelty of this method lies in the use of an automatic sampler with a local outlier factor and affinity propagation for intelligently selecting samples for the training set.…”
Section: Introduction
confidence: 99%
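A toy forward pass of a tied-weight ("symmetric") nonnegative sparse autoencoder, assuming NumPy; the dimensions, the soft threshold, and the sum-to-one rescaling are illustrative assumptions, not the exact architecture of [26]:

```python
import numpy as np

rng = np.random.default_rng(0)
bands, n_endmembers = 50, 4

# Decoder weights double as the endmember matrix; with tied weights
# the encoder reuses their transpose instead of learning separate ones.
W = np.abs(rng.normal(size=(bands, n_endmembers)))  # nonnegative by construction

def encode(y, threshold=0.1):
    # Tied encoder: project with W^T, then apply a nonnegative
    # threshold to promote sparse abundance estimates.
    a = np.maximum(0.0, W.T @ y - threshold)
    return a / (a.sum() + 1e-12)  # illustrative sum-to-one rescaling

def decode(a):
    # Linear mixing: reconstruct the spectrum from the endmembers.
    return W @ a

y = W @ np.array([0.7, 0.3, 0.0, 0.0])  # synthetic mixed pixel
a_hat = encode(y)
y_hat = decode(a_hat)
```

After training, the columns of `W` are read off directly as the extracted endmember spectra, which is what distinguishes endmember extraction from abundance-only estimation.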
“…Almost all other deep learning based methods for HSU do not perform blind unmixing, i.e., they do not estimate both the endmember spectra and their abundances. To the authors' best knowledge, the only deep learning methods that perform blind unmixing are the methods in [26]-[29]. The proposed method differs mainly from these methods in that it has a deep encoder and can exploit the sparsity of abundances through a layer with a custom activation function instead of explicit sparsity regularization.…”
Section: Introduction
confidence: 99%
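One common way a layer can induce sparsity through its activation alone is soft thresholding; this sketch (NumPy, with an illustrative threshold of 0.2) is a generic example of the idea, not the specific activation used by the proposed method:

```python
import numpy as np

def soft_threshold(x, lam=0.2):
    # Sparsity-promoting activation: shrinks every input toward zero
    # and sets entries with |x| <= lam exactly to zero, so sparsity
    # is built into the layer rather than added as an L1 penalty
    # on the loss.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

a = np.array([-0.5, -0.1, 0.05, 0.3, 1.0])
print(soft_threshold(a))  # small entries become exactly zero
```

Unlike an L1 regularizer, which only encourages small activations during training, a thresholding activation produces exact zeros in every forward pass.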
“…To tackle nonlinearities, nonlinear kernelized NMF was presented in [16]. More recently, unmixing methods based on autoencoders [17][18][19][20][21][22] have been employed to estimate endmembers and fractional abundances simultaneously. However, these algorithms are either limited to the linear mixing model or to implementing an existing nonlinear (bilinear) model in the autoencoder framework.…”
Section: Introduction
confidence: 99%
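The distinction between the linear mixing model and a bilinear extension can be made concrete with a small sketch, assuming NumPy; the abundance-product interaction coefficients below are one common bilinear form, used purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
bands, p = 30, 3
E = np.abs(rng.normal(size=(bands, p)))  # endmember spectra as columns
a = np.array([0.5, 0.3, 0.2])            # abundances, summing to one

# Linear mixing model: a pixel is a convex combination of endmembers.
y_linear = E @ a

# Bilinear extension: adds pairwise endmember interaction terms to
# model secondary scattering between materials.
y_bilinear = y_linear.copy()
for i in range(p):
    for j in range(i + 1, p):
        y_bilinear += a[i] * a[j] * (E[:, i] * E[:, j])
```

With nonnegative spectra the interaction terms only add energy, so the bilinear pixel dominates the linear one band by band; an autoencoder restricted to a linear decoder cannot represent these cross terms.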