2022
DOI: 10.1109/tgrs.2021.3126939

Explore Better Network Framework for High-Resolution Optical and SAR Image Matching


Cited by 18 publications (37 citation statements) | References 50 publications
“…To quantitatively evaluate the structural features of the pseudo-optical images generated by our network, inspired by the research in [61], a fast Fourier transform (FFT)-accelerated sum of squared differences (SSD) method is used to measure the similarity between the structural features of pseudo-optical images obtained via different methods and those of real optical images. Each position in the SSD score map corresponds to an offset between the image pair, and a smaller value indicates a higher similarity of their features [62]. Note that the SSD score map obtained with the maximum offset set to 8 pixels has dimensions of 17 × 17.…”
Section: Discussion
confidence: 99%
“…Area-based methods register image pairs using a similarity metric function, while feature-based methods involve four steps: feature extraction, feature matching, transformation model estimation, and image resampling and warping. As deep learning has great potential for feature extraction, numerous researchers have designed data-driven strategies in the field of cross-modal image alignment [35,20,4]. Although image alignment is a necessary step in many fields, it incurs extra time consumption and cannot completely address the weak-misalignment problem.…”
Section: Cross-modal Image Alignment
confidence: 99%
“…Figures 6-11 show the corresponding points matched by the proposed method and the four other matching networks on OS data. A yellow line represents a pair of correctly matched points, while a red line represents a false match.…”
Section: Qualitative Experiments
confidence: 99%
“…Due to their different imaging principles, there are nonlinear radiometric and geometric differences between optical and SAR images. Furthermore, speckle noise in SAR images seriously degrades matching performance, which makes it very challenging to find conjugate features for matching optical and SAR images [6]. For optical and SAR image matching, the difficulty lies in constructing robust features from heterogeneous image pairs.…”
Section: Introduction
confidence: 99%