2019
DOI: 10.1109/jstars.2019.2936771
Toward Generalized Change Detection on Planetary Surfaces With Convolutional Autoencoders and Transfer Learning

Cited by 35 publications (27 citation statements)
References 71 publications
“…In the deep feature learning phase, the AI model is usually supervised and pre-trained with sufficient labeled samples from other-domain data [67,110,168]. The fine-tuning phase is optional; in this phase, only a small number of labeled samples is required for fine-tuning [125,169–172] or for additional classifier training [90,140]. The change map can then be obtained directly from the trained classifier.…”
Section: Transfer Learning-Based Structure
confidence: 99%
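The two-phase workflow this excerpt describes (a frozen, pre-trained feature extractor plus a small classifier head trained on a few target-domain labels) can be sketched as follows. This is a minimal illustration, not the cited papers' actual networks: the "pre-trained" extractor is stood in for by a fixed random projection, and the labels and dimensions are invented toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase 1 stand-in: a feature extractor pre-trained on another domain.
# Here a fixed random projection plays that role; its weights stay frozen.
W_frozen = rng.normal(size=(16, 8))

def extract_features(x):
    """Map raw pixel vectors to mid-level features (frozen weights)."""
    return np.tanh(x @ W_frozen)

# Phase 2: only a small classifier head is trained, using a handful of
# labeled change/no-change samples from the target domain (toy labels).
X_small = rng.normal(size=(20, 16))                 # 20 labeled samples
y_small = (X_small.sum(axis=1) > 0).astype(float)   # toy change labels

w_head = np.zeros(8)
for _ in range(500):                     # plain logistic-regression head
    feats = extract_features(X_small)
    p = 1.0 / (1.0 + np.exp(-feats @ w_head))
    grad = feats.T @ (p - y_small) / len(y_small)
    w_head -= 0.5 * grad

# The trained head then yields the change map directly.
scores = 1.0 / (1.0 + np.exp(-extract_features(X_small) @ w_head))
change_map = (scores > 0.5).astype(int)
print(change_map.shape)
```

Only `w_head` is updated during training, which is the point of the transfer-learning structure: the expensive feature learning happens once, in the source domain.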
See 3 more Smart Citations
“…In the deep feature learning phase, the AI model is usually supervised, pre-trained with sufficient labeled samples in other domain data [67,110,168]. The fine-tuning phase is optional and, in this phase, only a small number of labeled samples are required for fine-tuning [125,[169][170][171][172] or additional classifier training [90,140]. Therefore, the change map can be directly obtained by the trained classifier.…”
Section: Transfer Learning-based Structurementioning
confidence: 99%
“…The commonly used AE models are stacked AEs [97,98,104], stacked denoising AEs [16,101,106,121–123,151,160,188], stacked Fisher AEs [189], sparse AEs [80], denoising AEs [102], fuzzy AEs [105], and contractive AEs [99,103]. These AEs preserve spatial information by expanding pixel neighborhoods into vectors, whereas convolutional AEs achieve this directly through convolution kernels [170,190]. Owing to these characteristics, AEs can implement change detection in an unsupervised manner and perform well.…”
Section: Autoencoder
confidence: 99%
“…Finally, the transfer learning-based structure has recently been investigated to alleviate the lack of training samples and to optimize the training process in a semi-supervised scenario. Transfer learning uses training in one domain to enable better results in another: specifically, the low- to mid-level features learned in the original domain can be transferred as useful features to the new domain, with fine-tuning performed on a few labeled samples (Kerner et al., 2019; Larabi et al., 2019).

3 Preliminary concepts

An MS/HS sensor records reflected light in tens (MS) or hundreds (HS) of narrow frequencies covering the visible, near-infrared, and shortwave-infrared bands of a wavelength λ (also called the spectrum). The spectrum is an M-dimensional feature vector (spectral feature vector), so that λ is spanned by M numeric spectral features (bands) λ_1, λ_2, ….…”
Section: Related Work
confidence: 99%