2018 26th International Conference on Geoinformatics
DOI: 10.1109/geoinformatics.2018.8557110
Generative Adversarial Network for Deblurring of Remote Sensing Image

Cited by 8 publications (15 citation statements)
References 10 publications
“…Each DRB block contains three 3 × 3 convolution layers, each followed by a batch-normalisation layer. Let $x_l$, the output of the $l$-th convolution layer, be the input of the DRB and let $r(\cdot)$ be a residual mapping function; then each DRB block can be expressed mathematically as [123],

$$
\begin{aligned}
R &= r\left(x_l, W_i\right) + x_l \\
r\left(x_l, W_i\right) &= rW_3\left[rW_2\left(rW_1\left(x_l\right)\right)\right] \\
D &= F\left(\left[x_l, x_{l+1}, x_{l+2}, x_{l+3}, R\right]\right) \\
O &= H(D)
\end{aligned}
$$

where $O$ is the dense residual block output, $R$ and $D$ are the outputs of the residual and dense connections respectively, and $[x_l, x_{l+1}, x_{l+2}, x_{l+3}, R]$ denotes the concatenation of the feature maps of layers $(l, \ldots, l+3)$ with the residual output $R$, $r$<...>…”
Section: Proposed Methods
confidence: 99%
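To make the DRB equations above concrete, here is a minimal PyTorch sketch. The class name DenseResidualBlock, the 64-channel width, and the 1 × 1 fusion convolution standing in for H are illustrative assumptions, not details taken from the cited paper.

```python
# Minimal sketch of a dense residual block (DRB) as described above.
# Three 3x3 conv + batch-norm layers form the residual mapping r(.),
# the block input is added back (R = r(x_l, W_i) + x_l), the
# intermediate feature maps are concatenated with R (the dense
# connection D), and a fusion layer H produces the output O.
import torch
import torch.nn as nn

class DenseResidualBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()

        def conv_bn():
            return nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )

        self.conv1, self.conv2, self.conv3 = conv_bn(), conv_bn(), conv_bn()
        # H: fuse the five concatenated feature maps back to `channels`.
        self.fuse = nn.Conv2d(5 * channels, channels, kernel_size=1)

    def forward(self, x):
        f1 = self.conv1(x)                        # x_{l+1}
        f2 = self.conv2(f1)                       # x_{l+2}
        f3 = self.conv3(f2)                       # x_{l+3}
        r = f3 + x                                # R = r(x_l, W_i) + x_l
        d = torch.cat([x, f1, f2, f3, r], dim=1)  # D = F([x_l, ..., R])
        return self.fuse(d)                       # O = H(D)
```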
“…Zhang et al. [123] utilise a GAN to deblur remote sensing images. Nine residual blocks with strided convolutions in the bottleneck section are used in the generative network.…”
Section: Related Work and Background
confidence: 99%
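As a rough illustration of that generator layout, the sketch below builds a trunk with two strided-convolution downsampling stages, nine residual blocks in the bottleneck, and transposed convolutions back to full resolution. The layer widths, InstanceNorm, and activation choices are assumptions for illustration, not details from [123].

```python
# Hypothetical generator trunk: strided convolutions downsample into a
# bottleneck of nine residual blocks, then transposed convolutions
# restore the input resolution.
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.InstanceNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection

def make_generator(in_ch: int = 3, base: int = 64) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, base, 7, padding=3), nn.ReLU(True),
        # strided convolutions down into the bottleneck
        nn.Conv2d(base, 2 * base, 3, stride=2, padding=1), nn.ReLU(True),
        nn.Conv2d(2 * base, 4 * base, 3, stride=2, padding=1), nn.ReLU(True),
        # nine residual blocks, as in the excerpt above
        *[ResBlock(4 * base) for _ in range(9)],
        # transposed convolutions back to full resolution
        nn.ConvTranspose2d(4 * base, 2 * base, 3, stride=2,
                           padding=1, output_padding=1), nn.ReLU(True),
        nn.ConvTranspose2d(2 * base, base, 3, stride=2,
                           padding=1, output_padding=1), nn.ReLU(True),
        nn.Conv2d(base, in_ch, 7, padding=3), nn.Tanh(),
    )
```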
“…2) Deep Learning Models: Deep learning has been extensively used in natural-image deblurring [154] and has demonstrated its advantage over handcrafted regularizers. Consequently, Zhang et al. [155] proposed an end-to-end learnable method based on GANs for HSI deblurring. However, CNN-based models are only suitable for a few specific types of blur and have limited ability to handle more general spatially varying blurs.…”
Section: Sparsity Optimization Models
confidence: 99%
“…A GAN-based methodology is also explored in [128] to deblur degraded remote sensing images in the context of image restoration. A common approach to this problem is to incorporate various priors into the restoration procedure as constraints, which often leads to inaccurate results.…”
Section: Restoration
confidence: 99%
“…A common approach to this problem is to incorporate various priors into the restoration procedure as constraints, which often leads to inaccurate results. In contrast, in [128] an end-to-end, kernel-free blind deblurring learning method is presented that does not need any prior assumptions about the blurs. During training, a discriminator network is defined with a gradient penalty, which is shown to be robust to the choice of generator architecture, and a perceptual loss: a simple L2 loss on the difference between the CNN feature maps of the generated and target images, which focuses on restoring general content instead of texture details.…”
Section: Restoration
confidence: 99%
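The two losses described in that excerpt can be sketched as follows: a WGAN-GP-style gradient penalty for the discriminator and a perceptual loss taken as the L2 distance between CNN feature maps of the generated and target images. Using VGG19 features up to relu3_3 (the first 16 layers) is an assumption; the excerpt does not name the feature extractor.

```python
# Illustrative sketches of a gradient penalty and a feature-map L2
# perceptual loss; hyperparameters and the VGG layer cut are assumed.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def gradient_penalty(disc, real, fake):
    # Interpolate between real and fake samples and penalise deviations
    # of the discriminator's input-gradient norm from 1.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grads = torch.autograd.grad(
        outputs=disc(x_hat).sum(), inputs=x_hat, create_graph=True
    )[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

# Frozen VGG19 feature extractor (through relu3_3); ImageNet input
# normalisation is omitted here for brevity.
_vgg_features = vgg19(weights="DEFAULT").features[:16].eval()
for p in _vgg_features.parameters():
    p.requires_grad_(False)

def perceptual_loss(generated, target):
    # L2 distance between CNN feature maps: restores general content
    # rather than pixel-level texture details.
    return F.mse_loss(_vgg_features(generated), _vgg_features(target))
```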