2021
DOI: 10.3390/rs14010144

Pix2pix Conditional Generative Adversarial Network with MLP Loss Function for Cloud Removal in a Cropland Time Series

Abstract: Clouds are one of the major limitations to crop monitoring with optical satellite images. Despite all efforts to provide decision-makers with high-quality agricultural statistics, there is still a lack of techniques for optimally processing satellite image time series in the presence of clouds. To this end, this article proposed adding a Multi-Layer Perceptron (MLP) loss function to the pix2pix conditional Generative Adversarial Network (cGAN) objective function. The aim was to enforce the generative model…
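The modification described in the abstract can be sketched as an extra term in the standard pix2pix objective. The weighting scheme below is an assumption for illustration; the abstract does not give the coefficients or the exact form of the MLP term:

```latex
G^{*} = \arg\min_{G}\max_{D}\;
  \mathcal{L}_{cGAN}(G, D)
  + \lambda_{1}\,\mathcal{L}_{L1}(G)
  + \lambda_{2}\,\mathcal{L}_{MLP}(G)
```

Here $\mathcal{L}_{cGAN}$ and $\mathcal{L}_{L1}$ are the usual pix2pix adversarial and L1 reconstruction losses, and $\mathcal{L}_{MLP}$ is the added MLP-based loss; $\lambda_{1}, \lambda_{2}$ are hypothetical trade-off weights.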

Cited by 17 publications (8 citation statements); references 37 publications.
“…A Pix2Pix GAN's architecture is made up of two basic components, a generator and a discriminator, as is the case for any GAN. The generator is in charge of creating the output image, while the discriminator is in charge of assessing the realism of the created image and delivering feedback to the generator [70].…”
Section: B. 2D Model Generation
confidence: 99%
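The generator/discriminator interplay described in the excerpt can be sketched minimally. This is a toy illustration of the two roles with hypothetical shapes and functions, not the paper's actual U-Net generator or PatchGAN discriminator:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    # Toy generator: maps a latent noise vector to a fake "image"
    # (here just a 4x4 array squashed into [-1, 1]).
    return np.tanh(z.reshape(4, 4))

def discriminator(img):
    # Toy discriminator: returns a realism score in (0, 1);
    # in training, this score is the feedback that drives generator updates.
    return 1.0 / (1.0 + np.exp(-img.mean()))

z = rng.standard_normal(16)
fake = generator(z)
score = discriminator(fake)
```

In an actual adversarial loop, the discriminator would be trained to push `score` down for generated images and up for real ones, while the generator would be updated in the opposite direction.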
“…Pix2Pix, using the discrete digital numbers (DNs) of conventional satellite images, which range from 0 to 255, has demonstrated remarkable translation from one image to another in satellite remote sensing applications [34][35][36]. The D2D framework translates one original dataset into another using normalization as pre-processing and denormalization as post-processing, converting between the original satellite-observed albedo or brightness temperature (TB) and a numerical array before and after adversarial learning.…”
Section: Introduction
confidence: 99%
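The normalize/denormalize round trip described in the excerpt can be sketched as a simple min-max mapping to [-1, 1] and back. The value range used here (brightness temperatures between 180 K and 320 K) is a hypothetical example, not the paper's configuration:

```python
import numpy as np

def normalize(x, lo, hi):
    # Pre-processing: map physical values (e.g. TB in kelvin)
    # linearly onto [-1, 1] for the network input.
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def denormalize(y, lo, hi):
    # Post-processing: invert the mapping to recover physical units
    # from the network output.
    return (y + 1.0) / 2.0 * (hi - lo) + lo

tb = np.array([180.0, 250.0, 320.0])        # hypothetical TB values in K
z = normalize(tb, 180.0, 320.0)             # -> [-1, 0, 1]
back = denormalize(z, 180.0, 320.0)         # round trip recovers tb
```

The same pair of maps works for albedo or 0–255 DNs by swapping the `lo`/`hi` bounds.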
“…However, MTcGAN's dependency on the input data implies a need for extensive evaluation. To address this, most previous studies have aimed to use an optical image taken on a reference date as close as possible to the prediction date to ensure satisfactory prediction performance [17,38]. As the physical condition of crops changes rapidly during the growth period, it is essential to analyze the impact of various input cases, such as different image acquisition dates and crop growth stages.…”
Section: Introduction
confidence: 99%
“…Periodic image acquisition over the area of interest can provide additional temporal information for SAR-to-optical translation. For example, the multi-temporal cGAN (MTcGAN) uses SAR and optical image pairs acquired on the same or a similar date (hereafter, the reference date), together with a single SAR image acquired on the prediction date, to extract temporal change information from multi-temporal images [17,38]. Owing to its ability to integrate additional information from an optical image on the reference date, it performs better than conventional SAR-to-optical image translation methods.…”
Section: Introduction
confidence: 99%
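The MTcGAN conditioning described in the excerpt (a reference-date SAR/optical pair plus a prediction-date SAR image) is commonly realized as channel-wise stacking of the inputs. The band counts and image size below are assumptions for illustration, not the cited studies' settings:

```python
import numpy as np

H, W = 64, 64
sar_ref = np.zeros((1, H, W))    # SAR image on the reference date (1 band)
opt_ref = np.zeros((4, H, W))    # optical image on the reference date (e.g. 4 bands)
sar_pred = np.zeros((1, H, W))   # SAR image on the prediction date (1 band)

# Stack all conditioning inputs along the channel axis; the network
# predicts the optical image on the prediction date from this tensor.
x = np.concatenate([sar_ref, opt_ref, sar_pred], axis=0)
```

Stacking along channels lets the generator compare the two SAR acquisitions pixel-by-pixel and transfer the observed temporal change onto the reference optical image.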