2022
DOI: 10.1109/tip.2021.3135708

Two-Step Registration on Multi-Modal Retinal Images via Deep Neural Networks

Cited by 23 publications (32 citation statements)
References 66 publications
“…More specifically, considering the case of deep learning-based multimodal retinal image registration, there are works where rigid or deformable transformations are obtained using supervised, weakly supervised, or unsupervised approaches [17][18][19][20][21][22][23]. While deformable methods [19,22,23] are more competitive than rigid ones [17,18,20,21], the former generally require that the input image pair is already approximately registered (usually via an affine transformation). To obtain this initial alignment, there are two options: incorporate this stage into the methodology itself, or assume that this step was previously performed by an external method.…”
Section: Related Work
confidence: 99%
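The affine pre-alignment mentioned in the statement above can be illustrated with a minimal sketch. This is a generic example, not the method of the paper under review: it solves the six affine parameters x' = a·x + b·y + c, y' = d·x + e·y + f exactly from three point correspondences (e.g., matched retinal landmarks) using Cramer's rule.

```python
def _det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def estimate_affine(src, dst):
    """Solve x' = a*x + b*y + c, y' = d*x + e*y + f from exactly
    three (x, y) -> (x', y') correspondences via Cramer's rule."""
    A = [[x, y, 1.0] for x, y in src]   # shared coefficient matrix
    D = _det3(A)

    def solve(rhs):
        # Replace column j of A with rhs and take the determinant ratio.
        out = []
        for j in range(3):
            M = [row[:] for row in A]
            for i in range(3):
                M[i][j] = rhs[i]
            out.append(_det3(M) / D)
        return out

    a, b, c = solve([x for x, _ in dst])
    d, e, f = solve([y for _, y in dst])
    return (a, b, c, d, e, f)

def apply_affine(params, pt):
    """Map a point through the estimated affine transform."""
    a, b, c, d, e, f = params
    x, y = pt
    return (a * x + b * y + c, d * x + e * y + f)
```

With more than three (noisy) correspondences, a least-squares or RANSAC fit would replace the exact solve; the three-point case shown here is the minimal configuration that determines an affine map.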
“…In relation to the unsupervised methods [18,19,23], they have the advantage of not requiring segmentation knowledge, but in the context of multimodal images, the absence of this type of knowledge tends to reduce the accuracy of the obtained registration. As for the purely supervised methods [17,20], the requirement of a high-quality ground truth can hinder their applicability, especially in a context associated with medical images and their daily use in clinical practice.…”
Section: Related Work
confidence: 99%
“…Different researchers provided various solutions. Some researchers adopted deep learning in the feature detection process and further aligned the feature points using conventional image alignment methods, such as RANSAC [40,41], while some work constructed an outlier-rejection network to compute the image transformation matrix [45,46]. There is no consensus on how deep learning should be added to the registration pipeline [46].…”
Section: Figure
confidence: 99%
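The RANSAC-based alignment of detected feature points mentioned above can be sketched as follows. This is an illustrative toy example, not code from the cited works: the motion model is reduced to a pure 2-D translation, hypothesized from randomly sampled keypoint matches and scored by inlier count.

```python
import random

def ransac_translation(matches, n_iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation from putative keypoint matches via RANSAC.

    matches: list of ((x1, y1), (x2, y2)) correspondences, possibly
             contaminated by mismatches (outliers).
    Returns ((dx, dy), inlier_count) for the best-scoring hypothesis.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        # Minimal sample for a translation model: a single match.
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        # Score the hypothesis by counting matches it explains within tol.
        inliers = sum(
            1 for (ax, ay), (bx, by) in matches
            if abs(ax + dx - bx) <= tol and abs(ay + dy - by) <= tol
        )
        if inliers > best_inliers:
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers
```

A real registration pipeline would use a richer model (affine or homography, needing 3 or 4 sampled matches per hypothesis) and refine the winning model on its inliers, but the hypothesize-and-verify loop is the same.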
“…Moreover, in most deep learning-based registration algorithms, accurate registration relies on accurate segmentation, which is still an ongoing research topic in medical image processing.…”
Section: Figure
confidence: 99%