2018
DOI: 10.1109/tgrs.2018.2810208

Missing Data Reconstruction in Remote Sensing Image With a Unified Spatial–Temporal–Spectral Deep Convolutional Neural Network

Abstract: Because of the internal malfunction of satellite sensors and poor atmospheric conditions such as thick cloud, the acquired remote sensing data often suffer from missing information, i.e., the data usability is greatly reduced. In this paper, a novel method of missing information reconstruction in remote sensing images is proposed. The unified spatial–temporal–spectral framework based on a deep convolutional neural network (STS-CNN) employs a unified deep convolutional neural network combined with spatial-tempor…
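The abstract describes fusing spatial, temporal, and spectral cues in one deep CNN to fill gaps caused by sensor failure or thick cloud. As a rough illustration of that idea (not the authors' STS-CNN), the PyTorch sketch below concatenates a degraded image, its missing-data mask, and an auxiliary temporal or spectral reference, and learns a residual correction that is applied only where data are missing; all layer sizes and names are assumptions made for the example.

```python
# Minimal sketch (not the authors' exact STS-CNN): a CNN that fuses a degraded
# target image, its missing-data mask, and an auxiliary temporal/spectral image,
# and predicts the reconstructed image. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ReconstructionCNN(nn.Module):
    def __init__(self, bands: int = 6, width: int = 64, depth: int = 6):
        super().__init__()
        # Input: degraded bands + binary mask + auxiliary bands, stacked on the channel axis.
        in_ch = bands * 2 + 1
        layers = [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, bands, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, degraded, mask, auxiliary):
        x = torch.cat([degraded, mask, auxiliary], dim=1)
        # Residual prediction: the network learns a correction added to the degraded input.
        out = degraded + self.body(x)
        # Keep observed pixels untouched; fill only where the mask marks gaps.
        return mask * degraded + (1.0 - mask) * out

# Usage with dummy tensors (batch of one 6-band 128x128 patch):
model = ReconstructionCNN(bands=6)
degraded = torch.randn(1, 6, 128, 128)
mask = torch.randint(0, 2, (1, 1, 128, 128)).float()   # 1 = observed, 0 = missing
auxiliary = torch.randn(1, 6, 128, 128)                 # temporal or spectral reference
reconstructed = model(degraded, mask, auxiliary)
```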

Cited by 337 publications (179 citation statements) | References 46 publications
“…The impact of clouds on optical satellite imagery can be of major concern, especially in tropical locations and regions with variable topography. Zhang et al [171] proposed a CNN-based approach for thick cloud removal and demonstrated their method using Landsat Thematic Mapper images. Sun et al [172] applied DL and a land surface model to extend the terrestrial total water storage (TWS) data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission, which was decommissioned in 2017.…”
Section: Spatial and Temporal Data Fusion (mentioning, confidence: 99%)
“…To capture multi-scale spatial information, the GoogLeNet inception module proposed by Szegedy et al [34] concatenates the outputs of different-sized filters, e.g., 3 × 3, 5 × 5, 7 × 7, assuming that each filter captures information at the corresponding scale. Recently, the inception module has been utilized for image reconstruction and fusion tasks and has achieved state-of-the-art performance [35][36][37]. However, increasing the filter size inevitably increases the number of parameters, which may not be appropriate when the prior images are insufficient (in our case, only two fine and coarse image pairs are available for training the spatiotemporal fusion task).…”
Section: Network Architecture (mentioning, confidence: 99%)
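The excerpt above explains the inception-style idea of running 3 × 3, 5 × 5, and 7 × 7 filters in parallel and concatenating their outputs to capture multiple spatial scales, at the cost of extra parameters for the larger kernels. The following PyTorch sketch is a minimal, hedged illustration of such a block; the channel counts are arbitrary assumptions rather than values from the cited papers.

```python
# A minimal inception-style block in the spirit of the excerpt: parallel 3x3, 5x5
# and 7x7 convolutions whose outputs are concatenated along the channel axis.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int = 16):
        super().__init__()
        # "Same" padding keeps the spatial size identical across branches so the
        # outputs can be concatenated.
        self.branch3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(in_ch, branch_ch, kernel_size=7, padding=3)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Each branch responds to structures at its own scale; concatenation lets
        # later layers mix the scales. Parameters grow with kernel size (9 vs 25
        # vs 49 weights per channel pair), which is the cost the excerpt notes.
        return self.act(torch.cat(
            [self.branch3(x), self.branch5(x), self.branch7(x)], dim=1))

# Example: a 4-band input patch produces 3 * 16 = 48 feature maps.
x = torch.randn(1, 4, 64, 64)
print(MultiScaleBlock(4)(x).shape)   # torch.Size([1, 48, 64, 64])
```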
“…In particular, convolutional neural networks (CNNs) [70] have become a widely representative deep model due to their feature detection power, which drastically improves the classification and detection of objects. As a result, CNNs are able to reach good generalization in HSI classification [71][72][73]. Their kernel-based architecture naturally integrates the spectral and spatial information contained in the HSI, taking into account not only the spectral signature of each pixel x_i but also the spectral information of the d × d neighborhood (also called a patch) that surrounds it, denoted by p_i ∈ R^(d×d×n_bands).…”
Section: Deep Neural Network for Hyperspectral Image Classification (mentioning, confidence: 99%)
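The excerpt describes feeding a CNN not just the spectral signature of a pixel x_i but the d × d patch p_i ∈ R^(d×d×n_bands) around it, so spectral and spatial information are mixed by the convolutions. The sketch below shows one plausible patch-based classifier of that kind in PyTorch; the band count, patch size, and layer widths are illustrative assumptions, not taken from the cited works.

```python
# Hedged sketch of a patch-based CNN classifier for hyperspectral images: each
# sample is the d x d neighborhood around a pixel with all n_bands spectral
# channels, so the convolutions mix spectral and spatial information.
import torch
import torch.nn as nn

class PatchHSIClassifier(nn.Module):
    def __init__(self, n_bands: int = 103, n_classes: int = 9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),          # pool over the spatial neighborhood
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, patches):
        # patches: (batch, n_bands, d, d) -- the p_i in R^{d x d x n_bands} of the
        # excerpt, with the band axis moved to the channel dimension.
        f = self.features(patches).flatten(1)
        return self.classifier(f)             # per-pixel class logits

# Example: classify a batch of 9x9 patches from a 103-band image
# (band count and patch size are purely illustrative).
logits = PatchHSIClassifier()(torch.randn(8, 103, 9, 9))
print(logits.shape)   # torch.Size([8, 9])
```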