2019
DOI: 10.3390/s19214675

Non-Rigid Multi-Modal 3D Medical Image Registration Based on Foveated Modality Independent Neighborhood Descriptor

Abstract: Non-rigid multi-modal three-dimensional (3D) medical image registration is highly challenging due to the difficulty of constructing a similarity measure and of solving for the non-rigid transformation parameters. A novel structural-representation-based registration method is proposed to address these problems. Firstly, an improved modality independent neighborhood descriptor (MIND) based on foveated nonlocal self-similarity is designed for effective structural representation of 3D medical…
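To make the descriptor idea concrete, the following is a minimal NumPy sketch of a MIND-style descriptor over a six-neighborhood search region. The Gaussian patch weighting is a simplified stand-in for the paper's foveated nonlocal self-similarity (a foveated kernel would vary the smoothing with distance from the patch center), so this is illustrative rather than the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mind_descriptor(volume, sigma=0.8):
    """Six-channel MIND-like descriptor for a 3D volume (illustrative sketch)."""
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    dists = []
    for off in offsets:
        shifted = np.roll(volume, shift=off, axis=(0, 1, 2))
        # Gaussian-filtered squared difference approximates a weighted patch
        # distance; a foveated variant would use a distance-dependent kernel.
        dists.append(gaussian_filter((volume - shifted) ** 2, sigma))
    dists = np.stack(dists, axis=0)
    # Local variance estimate: mean patch distance over the search region.
    variance = np.clip(dists.mean(axis=0), 1e-6, None)
    desc = np.exp(-dists / variance)
    # Normalize so the strongest channel response is 1 at every voxel.
    return desc / desc.max(axis=0, keepdims=True)
```

Registration can then compare the descriptors of the two modalities (e.g., via a sum of absolute differences), which is the point of a modality independent representation.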

Cited by 15 publications (7 citation statements) · References: 38 publications
“…Therefore, a lung CT image registration algorithm was established based on OMSD, and FFD and MSD were introduced for comparison with OMSD. The results showed that the SSD of OMSD was markedly lower than that of MSD and FFD (P < 0.05), which differed from the results of Yang et al. [14] due to the different performances of the registration algorithms. The quality of the CT images registered by OMSD was clearly better than that of MSD and FFD.…”
Section: Discussion (contrasting)
confidence: 85%
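For reference, the SSD used to score registration quality in that study is simply the summed squared intensity difference between the fixed image and the warped moving image; a minimal sketch (function name hypothetical, and valid only for mono-modal pairs such as CT–CT):

```python
import numpy as np

def ssd(fixed, warped_moving):
    """Sum of squared differences; lower means a closer intensity match."""
    diff = fixed.astype(np.float64) - warped_moving.astype(np.float64)
    return float(np.sum(diff * diff))
```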
“…Our method does not require the extraction of handcrafted features, a matching design, or a handcrafted similarity measure, and it also uses extensive clinical data that have not been annotated by medical experts. In addition, our DIR method uses a Siamese spatial transformer network to obtain the non-rigid transformation parameters more accurately than other methods, and it uses backpropagation to continuously optimize the similarity of the paired pre- and post-ablative images so as to minimize the distance between them (28–30). The proposed registration method can make full use of the advantages of deep neural networks to achieve better registration performance than previous methods.…”
Section: Discussion (mentioning)
confidence: 99%
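The backpropagation idea described in that statement can be sketched as iteratively optimizing a dense displacement field so that the warped pre-ablative image matches the post-ablative one. The PyTorch sketch below omits the Siamese feature network and substitutes plain MSE for the learned similarity, so it is illustrative only; all names are placeholders, not the cited paper's code.

```python
import torch
import torch.nn.functional as F

def identity_grid(shape):
    """Identity sampling grid of shape (1, D, H, W, 3) for grid_sample."""
    coords = torch.meshgrid(*[torch.linspace(-1, 1, s) for s in shape],
                            indexing="ij")
    # grid_sample expects (x, y, z) ordering, i.e. reversed axes.
    return torch.stack(coords[::-1], dim=-1).unsqueeze(0)

def register(pre, post, steps=200, lr=0.01):
    """pre, post: tensors of shape (1, 1, D, H, W)."""
    base = identity_grid(pre.shape[2:])
    disp = torch.zeros_like(base, requires_grad=True)  # displacement field
    opt = torch.optim.Adam([disp], lr=lr)
    for _ in range(steps):
        warped = F.grid_sample(pre, base + disp, align_corners=True)
        loss = F.mse_loss(warped, post)  # stand-in similarity to minimize
        opt.zero_grad()
        loss.backward()
        opt.step()
    return disp.detach()
```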
“…The techniques used to identify abnormalities claimed to be potentially malignant or pre-malignant show a higher capability to detect them thanks to the improved image quality. Furthermore, the achievements have been validated with the PSNR and IEM metrics [45], which can measure medical image quality with great competence. In more extended studies, numerous scanning techniques were investigated.…”
Section: Discussion (mentioning)
confidence: 99%
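PSNR, one of the two metrics cited there, has a standard closed form; a minimal sketch follows (the IEM metric is not reproduced here, and max_val is the peak intensity of the image type, e.g. 255 for 8-bit data):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```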
“…All of the most commonly applied image quality metrics, including the IEM, confirmed the significance of the results. The proposed algorithm was compared with four state-of-the-art SRR methods: Non-Rigid Multi-Modal 3D Medical Image Registration Based on Foveated Modality Independent Neighbourhood Descriptor [45], Residual dense network for image super-resolution [15], Enhanced deep residual networks for single image super-resolution [14], and Image super-resolution using very deep residual channel attention networks [16]. L1-cost regularisation is applied to learn the nets of the latter three procedures: Enhanced deep residual networks for single image super-resolution, Residual dense network for image super-resolution, and Image super-resolution using very deep residual channel attention networks.…”
Section: Discussion (mentioning)
confidence: 99%
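The L1-cost regularisation mentioned there refers to training the super-resolution networks with a mean-absolute-error objective, as is common for EDSR/RDN/RCAN-style models. A hypothetical PyTorch training step (model, optimizer, and data names are placeholders, not the cited papers' code):

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, low_res, high_res):
    """One optimization step with the L1 (mean absolute error) cost."""
    optimizer.zero_grad()
    prediction = model(low_res)
    loss = F.l1_loss(prediction, high_res)  # L1 cost between output and target
    loss.backward()
    optimizer.step()
    return loss.item()
```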