2019 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2019.00062
Matching Disparate Image Pairs Using Shape-Aware ConvNets

Abstract: An end-to-end trainable ConvNet architecture, which learns to harness the power of shape representation for matching disparate image pairs, is proposed. Disparate image pairs are deemed those that exhibit strong affine variations in scale, viewpoint, and projection parameters, accompanied by partial or complete occlusion of objects and extreme variations in ambient illumination. Under these challenging conditions, neither local nor global feature-based image matching methods, when used in isolation…

Cited by 2 publications (4 citation statements). References 33 publications (116 reference statements).
“…Supervised deep learning-based techniques estimate the correspondences between pairs of disparate natural-scene images [13,23,25] by training on benchmark datasets of disparate natural-scene images with known ground-truth correspondences. However, ground-truth correspondences are unknown in fetoscopic videos.…”
Section: Introduction
Confidence: 99%
“…However, ground-truth correspondences are unknown in fetoscopic videos. Moreover, [13] and [25] used pairs of high-resolution natural-scene images that are sharp and rich in both texture and color contrast. Fetoscopic videos, by contrast, are low resolution, lack both texture and color contrast because the in vivo scene is monotonic in color, and are unsharp due to the averaging introduced to compensate for the honeycomb effect of the fiber-bundle scope. As a result, hand-crafted feature-based methods perform poorly on fetoscopic videos.…”
Section: Introduction
Confidence: 99%