2022
DOI: 10.3390/rs14122811
Registration of Multisensor Images through a Conditional Generative Adversarial Network and a Correlation-Type Similarity Measure

Abstract: The automatic registration of multisensor remote sensing images is a highly challenging task due to the inherently different physical, statistical, and textural characteristics of the input data. Information-theoretic measures are often used, as they compare local intensity distributions in the images. In this paper, a novel method based on the combination of a deep learning architecture and a correlation-type area-based functional is proposed for the registration of a multisensor pair of images, including a…

Cited by 7 publications (5 citation statements)
References 79 publications
“…However, we think that the high response locations obtained by these Harris-like operators can only be considered 'salient' in a very limited local neighborhood, about 10 × 10 pixels for the normally applied parameter setting. It does not guarantee the saliency of the local image template, which is usually 100 × 100 to 200 × 200 pixels [11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30]. To this end, a deep learning-based feature point detector is proposed in [29], which uses a convolutional network to assess the 'goodness' of the local image patches for template matching.…”
Section: Salient Sparse Feature Point Detection
confidence: 99%
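The quoted passage contrasts Harris-like operators, whose responses are only locally salient, with the much larger templates used for matching. As an illustration of the kind of response those operators compute, here is a minimal Harris corner-response sketch; the window size, the constant k, and the uniform smoothing are illustrative assumptions, not the settings of the cited papers:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.05, win=5):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor averaged over a win x win window."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)           # image gradients along rows/cols
    Sxx = uniform_filter(Ix * Ix, win)  # windowed second-moment averages
    Syy = uniform_filter(Iy * Iy, win)
    Sxy = uniform_filter(Ix * Iy, win)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

# A bright quadrant: the corner scores positive, edges negative, flat areas ~0.
img = np.zeros((40, 40))
img[20:, 20:] = 1.0
R = harris_response(img)
```

A high response at a single corner pixel says nothing about whether the surrounding 100 × 100 template is distinctive, which is exactly the limitation the passage points out.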
“…Current studies on optical-SAR registration, no matter the handcrafted methods [7][8][9][10][11][12][13][14][15][16][17][18][19] or the deep learning-based ones [20][21][22][23][24][25][26][27][28][29][30], mainly focus on dealing with the vast radiometric and geometric disparity problem, which makes it quite difficult to obtain sufficient reliable CPs that are sparsely distributed across the input image pairs. After the putative CPs are obtained, outlier removal and image warping are mostly conducted under the assumption that the geometric relationship between the input optical-SAR image pairs can be depicted by a linear equation, such as the affine or projective transformation.…”
Section: Introduction
confidence: 99%
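The passage above notes that, once putative control points (CPs) are obtained, image warping typically assumes a linear geometric relationship such as an affine transformation. A minimal sketch of fitting that affine model to matched CPs by least squares (illustrative only; the cited methods also perform outlier removal, e.g. RANSAC, before this step):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine fit mapping src -> dst.
    src, dst: (N, 2) arrays of matched control-point coordinates, N >= 3.
    Returns a 2x3 matrix M such that dst ~= src @ M[:, :2].T + M[:, 2]."""
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords, N x 3
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # solves A @ M ~= dst
    return M.T

# Recover a known transform from four synthetic control points.
M_true = np.array([[1.2, -0.3,  5.0],
                   [0.1,  0.9, -2.0]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
dst = src @ M_true[:, :2].T + M_true[:, 2]
M_est = estimate_affine(src, dst)
```

With noisy or contaminated CPs the fit degrades, which is why sufficient, reliable, and well-distributed CPs matter as much as the model itself.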
“…To mitigate the feature differences, several scholars have proposed image transformation-based methods. For instance, Maggiolo et al [25] employed a conditional GAN-based generation strategy to convert optical images into SAR images and then conducted a template matching between the GAN-generated SAR and real SAR images. Huang et al [26] utilized a CycleGAN network structure to transform SAR images into pseudooptical images and registered them with real optical images.…”
Section: Introduction
confidence: 99%
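The template matching between GAN-generated SAR and real SAR images described above can be sketched with a simple correlation-type score. Zero-mean normalized cross-correlation (NCC) is used here as a stand-in for the paper's correlation-type similarity measure, not its exact functional, and the exhaustive search is purely illustrative:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-shape patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def match_template(image, template):
    """Exhaustively scan `image` for the window most correlated with
    `template`; returns the (row, col) of the best match and its score."""
    th, tw = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best_score:
                best_score, best_pos = s, (r, c)
    return best_pos, best_score

rng = np.random.default_rng(0)
image = rng.random((30, 30))
template = image[10:20, 15:25].copy()  # known ground-truth offset (10, 15)
pos, score = match_template(image, template)
```

Translating one modality into the other first (as in [25] and [26]) is what makes such an intensity-correlation score meaningful, since raw optical and SAR intensities are not directly comparable.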
“…Recently, inter-modality translation has received considerable attention in the remote sensing community. In this sense, several approaches for transforming images acquired by an EO sensor into images that follow the characteristics of another type of EO sensor have been developed for multimodal SITS analysis [9], bi-temporal change detection [3,10,11], and registration of multi-sensor images [12].…”
Section: Introduction
confidence: 99%