2020
DOI: 10.1080/01431161.2020.1724346
Hyperspectral unmixing using deep convolutional autoencoder

Cited by 32 publications (15 citation statements)
References 31 publications
“…2, which is composed of four main phases: preprocessing, training, testing, and evaluation. The proposed workflow was inspired by semantic segmentation, owing to its strong performance in several applications such as scene comprehension,32 satellite image processing,15,33 and object detection in satellite images.34 The UNet model,35 a well-known and effective semantic segmentation architecture, is used in the training phase to identify the changed regions.…”
Section: Proposed Workflow
confidence: 99%
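The UNet idea mentioned in the statement above — an encoder that downsamples, a decoder that upsamples, and skip connections joining the two — can be sketched at its smallest scale. This is an illustrative toy (plain numpy, one encoder/decoder level), not the cited authors' implementation:

```python
import numpy as np

def down(x):
    # 2x2 max pooling: the downsampling step of a UNet encoder level
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def up(x):
    # nearest-neighbour upsampling: the decoder's expanding step
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def unet_skip(x):
    # One encoder/decoder level with a skip connection: the decoder
    # receives both the upsampled coarse map and the full-resolution
    # input features, which is what preserves spatial detail in UNet.
    skip = x
    coarse = down(x)
    return np.stack([skip, up(coarse)], axis=0)  # "concatenated" channels
```

In a real UNet the pooled map would pass through convolutions before being upsampled, and the concatenation feeds further convolutions; the skip path shown here is the structural idea the citing paper relies on.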
“…The availability of satellite instruments, the enormous amount of data acquired, and the available computational power have enabled deeper neural networks, introducing new challenges in the Earth-science domain.15,16 Recent advances in DL have demonstrated state-of-the-art results in pattern-recognition tasks, mainly in image processing and speech recognition.17,18 Modern convolutional neural network (CNN) architectures19-21 tend to contain many hidden layers and millions of neurons, allowing them to concurrently learn hierarchical features for a broad class of patterns from data and to achieve models well tailored to the targeted application.…”
Section: Introduction
confidence: 99%
“…This problem is solved by applying the MNF transformation, also called cascaded PCA [1]. The most notable difference is that MNF takes the noise within the data set into account, while PCA considers only the variance of each vector [25]. The noise in the data set is not constant everywhere, but the variance of the noisy data is greater than the variance of the actual data, so the principal components obtained with MNF are better [1].…”
Section: Minimum Noise Fraction (MNF)
confidence: 99%
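The cascade described in the statement above — whiten the data with an estimated noise covariance, then apply ordinary PCA — can be sketched numerically. This is a minimal illustration, assuming a simple neighbour-difference noise estimate; the function names are ours, not from the cited papers:

```python
import numpy as np

def pca(X, k):
    # ordinary PCA: centre, then keep the top-k components by variance
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1]      # eigh returns ascending order
    return Xc @ vecs[:, order[:k]]

def estimate_noise_cov(X):
    # crude noise estimate: differences between neighbouring samples
    # (signal cancels, noise remains; /2 corrects the doubled variance)
    return np.cov(np.diff(X, axis=0), rowvar=False) / 2.0

def mnf(X, k):
    # MNF as "cascaded PCA": whiten with the noise covariance so every
    # band has unit noise, then rank components by (signal) variance
    vals, vecs = np.linalg.eigh(estimate_noise_cov(X))
    W = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
    return pca(X @ W, k)
```

After the whitening step, directions with high variance are high-variance *relative to the noise*, which is exactly why MNF components order better than plain PCA when noise levels differ across bands.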
“…The concept of the autoencoder was proposed early on and was originally applied to processing high-dimensional, complex data, which promoted the development of neural networks [42][43][44]. The autoencoder is an unsupervised learning algorithm within deep learning — more precisely, a self-supervised learning algorithm whose label data are derived from the input samples themselves.…”
Section: Automatic Encoder
confidence: 99%
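The self-supervised setup described above — the "labels" are the input samples themselves — can be shown with the smallest possible autoencoder. This is an illustrative linear sketch trained by plain gradient descent, not the convolutional autoencoder of the indexed paper:

```python
import numpy as np

def train_autoencoder(X, hidden, epochs=1000, lr=0.01, seed=0):
    """Linear autoencoder trained to reconstruct its own input.

    No external labels are used: the reconstruction target is X itself,
    which is the self-supervised setting the quoted passage describes.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    We = rng.normal(scale=0.1, size=(d, hidden))   # encoder weights
    Wd = rng.normal(scale=0.1, size=(hidden, d))   # decoder weights
    for _ in range(epochs):
        H = X @ We                  # encode into the hidden (code) layer
        R = H @ Wd                  # decode back to input space
        G = 2.0 * (R - X) / n       # gradient of the mean squared error
        gWd = H.T @ G               # backprop into the decoder
        gWe = X.T @ (G @ Wd.T)      # backprop into the encoder
        Wd -= lr * gWd
        We -= lr * gWe
    return We, Wd
```

With a bottleneck (`hidden < d`) the network is forced to learn a compressed code; a deep convolutional autoencoder applies the same principle with convolutional encoder/decoder layers and a nonlinear code.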