2018
DOI: 10.1109/jstars.2018.2803212

Exploring the Potential of Conditional Adversarial Networks for Optical and SAR Image Matching

Abstract: This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any products or services.


Cited by 129 publications (78 citation statements)
References 29 publications
“…The experimental results showed a good performance of the network even without a priori known features. In a similar manner, [5] demonstrated how this form of image-to-image translation can help to ease the coregistration of SAR and optical images for an improvement of the geolocation of optical data. Although the training was experienced to be quite expensive, and although it remains difficult to evaluate artificial, GAN-generated images by standard metrics, visual inspection and comparison to standard similarity measures such as NCC, BRISK and SIFT show the great potential of this approach.…”
Section: Related Work
confidence: 91%
“…This paper addresses one possible destination point, using an adapted conditional generative adversarial network, taking domain-specific potentials and peculiarities into specific consideration. The discipline of conditional adversarial networks (cGANs) offers promising strategies for generating artificial images and has already successfully been adapted to tasks in multi-sensor remote sensing, e.g., [5][6][7]. Considering the current state-of-the-art, this paper investigates the utilization of an adapted version of the CycleGAN architecture [8] for the SAR-to-optical image translation task and the value of domain knowledge (e.g., initialization, sensor/image characteristics) for improving results.…”
Section: Introduction
confidence: 99%
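The cycle-consistency constraint at the heart of the CycleGAN architecture referenced in the statement above can be illustrated with a minimal numerical sketch. The toy generators `G` and `F` below are hypothetical stand-ins for the trained SAR-to-optical and optical-to-SAR networks; only the loss term itself follows the CycleGAN formulation:

```python
import numpy as np

# Toy "generators": G maps SAR -> optical, F maps optical -> SAR.
# Real CycleGAN uses convolutional networks; these near-inverse
# functions stand in here purely to illustrate the loss term.
def G(x):
    return x * 1.05   # hypothetical SAR-to-optical mapping

def F(y):
    return y / 1.05   # hypothetical optical-to-SAR mapping

def cycle_consistency_loss(x, y):
    """L_cyc = E[||F(G(x)) - x||_1] + E[||G(F(y)) - y||_1]"""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

sar = np.full((8, 8), 0.5)
opt = np.full((8, 8), 0.7)
loss = cycle_consistency_loss(sar, opt)
print(loss)  # ~0 for (near-)inverse mappings
```

The loss is (near) zero exactly when the two mappings invert each other, which is what pushes the translated SAR-like image to stay geometrically faithful to its source.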
“…Taking a different approach to the problem, (Merkle et al, 2018) proposed the use of a generative adversarial network (GAN) to generate SAR-like templates from optical image patches. These templates were then used as input to standard template matching approaches such as mutual information (MI) or normalized cross correlation (NCC).…”
Section: Deep Learning for SAR-Optical Matching
confidence: 99%
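The template-matching step described above, matching a GAN-generated SAR-like template against real SAR imagery via normalized cross correlation (NCC), can be sketched in pure NumPy. The function name and the toy data are illustrative, not the authors' implementation:

```python
import numpy as np

def ncc_match(template, image):
    """Slide `template` over `image` and return the NCC score map.

    NCC is invariant to affine intensity changes, which is why it is a
    common baseline for matching synthetic SAR-like templates against
    real SAR imagery.
    """
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for row in range(scores.shape[0]):
        for col in range(scores.shape[1]):
            w = image[row:row + th, col:col + tw]
            wz = w - w.mean()
            denom = t_norm * np.sqrt((wz ** 2).sum())
            if denom > 0:
                scores[row, col] = (t * wz).sum() / denom
    return scores

# Example: an exact sub-patch of the image should score 1.0 at its
# true location, and the argmax of the score map recovers the shift.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
template = image[20:28, 30:38].copy()
scores = ncc_match(template, image)
y, x = np.unravel_index(scores.argmax(), scores.shape)
print((y, x))  # (20, 30)
```

In practice, libraries such as OpenCV (`cv2.matchTemplate` with `TM_CCOEFF_NORMED`) or scikit-image (`skimage.feature.match_template`) provide optimized versions of this same operation.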
“…The coupled optical and SAR patches for different sources are then automatically extracted by the learning features in the pretrained network Pseudo-Siamese CNN [47] and generative matching network (GMN) [48], respectively. At the same time, in [49], the corresponding SAR-like images are constructed via conditional generative adversarial networks (cGANs). In this regard, improving the accuracy of ground control points selection proves to be effective.…”
Section: Training Algorithms in B_cnn and Trans_cnn
confidence: 99%