2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2015
DOI: 10.1109/cvpr.2015.7298707
A low-dimensional step pattern analysis algorithm with application to multimodal retinal image registration

Abstract: Existing feature descriptor-based methods for retinal image registration are mainly based on the scale-invariant feature transform (SIFT) or the partial intensity invariant feature descriptor (PIIFD). Although these descriptors are widely exploited, they do not perform well on unhealthy multimodal images with severe diseases. Additionally, the descriptors demand high dimensionality to adequately represent the features of interest. The higher the dimensionality, the greater the consumption of resources (e.g. memor…

Cited by 26 publications (21 citation statements) · References 33 publications
“…On the other hand, the application of generic local shape descriptors is also limited by the differences among retinal image modalities, and usually requires preprocessing for multimodal scenarios [7]. Some proposals address this issue by designing domain-specific descriptors [8], [2], but they still rely on non-specific methods for detecting interest points. Algorithms that aim to detect common retinal structures can provide more representative and repeatable characteristics.…”
Section: Introduction
confidence: 99%
“…Supervised evaluation metrics require manually labeled keypoint correspondences between the multimodal images. Popular metrics include maximum error (MAE) [10], [11], [17], [18], median error (MEE) [10], [11], [17], [18], root mean square error (RMSE) [11], [12], [17], [18], and percentage of correct keypoints (PCK) [23].…”
Section: B. Objective
confidence: 99%
“…(4) The choice of threshold T is task dependent. For retinal image registration tasks, an RMSE of less than 5 pixels is usually considered a successful registration [11], [12], [17], [18], so the threshold T can be set to 5 pixels. To compute these metrics, we first need to manually label pairs of keypoint correspondences (generally 6 or more [10]-[12], [17], [18]) for all the multimodal images, where the keypoints should lie accurately on salient landmarks such as vessel bifurcations and be uniformly distributed over the overlapping area.…”
Section: B. Objective
confidence: 99%
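The supervised metrics described in the excerpts above can be sketched as a small helper. This is a minimal illustration, not code from any of the cited papers: the function name and result keys are hypothetical, the input is assumed to be two (N, 2) arrays of manually labeled corresponding keypoints, and the 5-pixel success threshold follows the convention quoted in the text.

```python
import numpy as np

def registration_errors(pts_ref, pts_warped, pck_threshold=5.0):
    """Supervised registration metrics from labeled correspondences.

    pts_ref, pts_warped: (N, 2) arrays of corresponding keypoints
    (typically N >= 6, placed on landmarks such as vessel bifurcations).
    Metric names follow the conventions cited in the text:
    MAE = maximum error, MEE = median error, RMSE = root mean square
    error, PCK = percentage of correct keypoints.
    """
    pts_ref = np.asarray(pts_ref, dtype=float)
    pts_warped = np.asarray(pts_warped, dtype=float)
    # Euclidean distance between each pair of corresponding keypoints
    d = np.linalg.norm(pts_ref - pts_warped, axis=1)
    rmse = float(np.sqrt(np.mean(d ** 2)))
    return {
        "RMSE": rmse,
        "MAE": float(np.max(d)),                    # maximum error
        "MEE": float(np.median(d)),                 # median error
        "PCK": float(np.mean(d <= pck_threshold)),  # fraction within threshold
        "success": rmse < 5.0,                      # RMSE < 5 px convention
    }
```

For example, if every warped keypoint lands exactly 3 pixels from its reference, all three error metrics equal 3, PCK is 1.0 at the 5-pixel threshold, and the registration counts as successful.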
“…We made our contribution by proposing a simple scheme that can decide the best transformation for each image pair, and our results show improvements for a considerable number of methods with an insignificant increase in computational effort. As future work, we are currently adapting our method to address multimodal fundus image registration (GHASSABI et al., 2013; WANG et al., 2015; LEE et al., 2015). Although VOTUS was not originally developed to deal with multimodal images, we achieved promising results with a slight change to our method.…”
Section: Discussion
confidence: 95%