2020
DOI: 10.1088/1361-6560/ab843e
Deep learning in medical image registration: a review

Abstract: This paper presents a review of deep learning (DL) based medical image registration methods. We summarized the latest developments and applications of DL-based registration methods in the medical field. These methods were classified into seven categories according to their methods, functions and popularity. A detailed review of each category was presented, highlighting important contributions and identifying specific challenges. A short assessment was presented following the detailed review of each category to…


Cited by 392 publications (270 citation statements)
References 185 publications
“…For instance, in brain MRI scans, T1‐weighted (T1) images show distinguishable white and grey matters, T1‐weighted, and contrast‐enhanced (T1c) images can be used for assessment of the change of tumor shape with its enhanced demarcation around tumor, T2‐weighted (T2) images show fluid obviously from cortical tissue, while contours of lesion can be delineated clearly on fluid‐attenuated inversion recovery (Flair) images 5,6 . Therefore, integrating the strengths of each modality can help exploring rich underlying information of tissue that facilitate diagnosis and treatment management 7–12 . However, in MRI scan, due to limited scan time, incorrect machine settings, scan artifacts and corruption, and patient allergies to contrast agents, it is difficult to apply a unified group of scan sequences to each individual patient even with a similar disease, for example, glioblastoma.…”
Section: Introduction
confidence: 99%
“…Deep learning-based DIR methods have been proposed for MRI brain, 21 CT head/neck, 22 CT chest, 23 MR/US prostate, 24 4D-CT lung [25][26][27][28] and so on. 29 Eppenhof et al proposed a supervised convolutional neural network (CNN) using U-Net architecture. 27 They trained their network using synthetic random transformations.…”
Section: Related Work
confidence: 99%
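The excerpt above describes a supervised registration setup in which training pairs are generated by applying known synthetic random transformations to images, so the ground-truth deformation field is available for free. The sketch below illustrates that idea only in minimal form; the function name, parameters (`max_disp`, `smooth`), and the use of NumPy/SciPy are assumptions for illustration, not the cited authors' actual code or architecture.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def synthetic_pair(image, max_disp=4.0, smooth=8.0, seed=0):
    """Warp `image` with a known random smooth displacement field,
    yielding a (moving image, ground-truth field) training pair.
    Parameters are illustrative, not from the cited paper."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Random per-pixel displacements, smoothed so the warp is gentle.
    field = rng.standard_normal((2, h, w))
    field = gaussian_filter(field, sigma=(0, smooth, smooth))
    # Rescale so the largest displacement is max_disp pixels.
    field *= max_disp / (np.abs(field).max() + 1e-8)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys + field[0], xs + field[1]])
    moving = map_coordinates(image, coords, order=1, mode="nearest")
    return moving, field

# Toy fixed image: a bright square on a dark background.
fixed = np.zeros((64, 64))
fixed[24:40, 24:40] = 1.0
moving, gt_field = synthetic_pair(fixed)
```

A network could then be trained to predict `gt_field` from the pair `(fixed, moving)`, which is the supervised strategy the excerpt attributes to Eppenhof et al.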
“…However, FE model that consists of geometry meshing, material property assignment, boundary condition definition, and so on usually requires substantial time and labor to build and solve, which prevent FE from being routinely used in the clinic. 16,17,28,29 As artificial intelligence develops, many deep learning-based methods have been proposed for medical image processing such as segmentation, 30 registration, 31 synthesis, 32 and so on. Compared to traditional image processing methods, deep learning-based methods are generally faster and more robust to hyperparameter selection.…”
Section: Introduction
confidence: 99%