Nonrigid local image registration plays an important role in medical imaging. In this paper we focus on demon registration, introduced by Thirion [1], which is comparable to fluid registration. Because demon registration cannot handle multiple MRI modalities, we introduce an MRI modality transformation that converts the representation of a T1 scan into a T2 scan using the peaks in a joint histogram. We compare the performance of demon registration with modality transformation, demon registration with gradient images, and Rueckert's [2] B-spline-based free-form deformation method combined with mutual information. For this test we use perfectly aligned T1 and T2 slices from the BrainWeb database [3], to which we apply local spherical distortions. In conclusion, demon registration with modality transformation gives the smallest registration errors in the case of large local spherical distortions and small bias fields.
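The modality transformation described above can be sketched as a lookup table built from the joint histogram of a pair of aligned scans: for every T1 intensity bin, the T2 bin with the highest joint count (the histogram peak) supplies the transformed value. The following is a minimal illustration of that idea, not the paper's implementation; the function name and bin count are assumptions.

```python
import numpy as np

def modality_transform(t1, t2, bins=64):
    """Map T1 intensities to T2-like intensities via joint-histogram peaks.

    For each T1 intensity bin, the T2 bin with the highest joint count
    (the peak of that row of the joint histogram) defines the mapped value.
    Assumes t1 and t2 are aligned arrays with intensities in [0, 1).
    """
    hist, _, t2_edges = np.histogram2d(
        t1.ravel(), t2.ravel(), bins=bins, range=[[0, 1], [0, 1]])
    # Peak T2 bin for every T1 bin; bin centres give the mapped intensity.
    t2_centres = 0.5 * (t2_edges[:-1] + t2_edges[1:])
    lut = t2_centres[np.argmax(hist, axis=1)]
    # Apply the lookup table to the T1 image.
    idx = np.clip((t1 * bins).astype(int), 0, bins - 1)
    return lut[idx]
```

After this transformation the T1 image has T2-like contrast, so a mono-modal criterion such as the demon forces can be applied directly.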
Purpose: To present a semi-automatic deformable registration algorithm for co-registering T2-weighted (T2w) images of the prostate with whole-mount pathological sections of prostatectomy specimens. Materials and Methods: Twenty-four patients underwent 1.5 Tesla (T) endorectal MR imaging before radical prostatectomy with whole-mount step-section pathologic analysis of surgical specimens. For each patient, the T2w image containing the largest area of tumor was manually matched with the corresponding pathologic slice. The prostate was co-registered using a free-form deformation (FFD) algorithm based on B-splines. Registration quality was assessed through differences between prostate diameters measured in right-left (RL) and anteroposterior (AP) directions on T2w images and pathologic slices and calculation of the Dice similarity coefficient, D, for the whole prostate (WP), the peripheral zone (PZ) and the transition zone (TZ). Results: The mean differences in diameters measured on pathology and MR imaging in the RL direction and the AP direction were 0.49 cm and −0.63 cm, respectively, before registration and 0.10 cm and −0.11 cm, respectively, after registration. The mean D values for the WP, PZ and TZ were 0.76, 0.65, and 0.77, respectively, before registration and increased to 0.91, 0.76, and 0.85, respectively, after registration. The improvements in D were significant for all three tissues (P < 0.001 for all). Conclusion: The proposed semi-automatic method enabled successful co-registration of anatomical prostate MR images to pathologic slices.
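The Dice similarity coefficient used as the quality measure above is D = 2|A∩B| / (|A| + |B|) for two binary segmentation masks A and B; D = 1 means perfect overlap and D = 0 means no overlap. A minimal sketch for binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient D = 2|A ∩ B| / (|A| + |B|)
    for two binary masks of the same shape."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

In the study, D is evaluated separately on the WP, PZ and TZ masks before and after the B-spline FFD registration.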
Abstract. Cone-beam computed tomography (CBCT) is an important image modality for dental surgery planning, providing high-resolution images at a relatively low radiation dose. In these scans the mandibular canal is hardly visible, which is a problem for implant surgery planning. We use anisotropic diffusion filtering to remove noise and enhance the mandibular canal in CBCT scans. For the diffusion tensor we use hybrid diffusion with a continuous switch (HDCS), suitable for filtering both tubular and planar image structures. In this paper we focus on the diffusion discretization schemes. The standard scheme shows good isotropic filtering behavior but is not rotationally invariant; the diffusion scheme of Weickert is rotationally invariant but suffers from checkerboard artifacts. We introduce a new scheme in which we numerically optimize the image derivatives. This scheme is rotationally invariant and shows good isotropic filtering properties on both synthetic and real CBCT data.
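To make the discretization discussion concrete, the sketch below shows one explicit step of classic Perona-Malik nonlinear diffusion, the simplest edge-preserving diffusion filter. This is only an illustration of the family of methods involved; the HDCS tensor scheme and the optimized derivative scheme of the paper are considerably more elaborate, and the parameter values here are assumptions.

```python
import numpy as np

def perona_malik_step(img, kappa=0.1, dt=0.2):
    """One explicit Perona-Malik diffusion step on a 2-D image
    (periodic boundaries via np.roll). Illustrative only; the paper's
    HDCS scheme uses a full diffusion tensor instead."""
    # Finite differences to the four nearest neighbours.
    dn = np.roll(img, -1, axis=0) - img
    ds = np.roll(img, 1, axis=0) - img
    de = np.roll(img, -1, axis=1) - img
    dw = np.roll(img, 1, axis=1) - img
    # Edge-stopping conductivity g = exp(-(|grad I| / kappa)^2):
    # small gradients diffuse freely, strong edges are preserved.
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
```

The discretization issues the paper addresses (rotational invariance, checkerboard artifacts) arise precisely in how such neighbour differences approximate the continuous divergence term.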
Multi-modal image registration enables images from different modalities to be analyzed in the same coordinate system. The class of B-spline-based methods that maximize the Mutual Information between images produces satisfactory results in general, but these methods are often complex and can converge slowly. The popular Demons algorithm, while fast and easy to implement, produces unrealistic deformation fields and is sensitive to illumination differences between the two images, which makes it unsuitable for multi-modal registration in its original form. We propose a registration algorithm that combines a B-spline grid with deformations driven by image forces. The algorithm is easy to implement and is robust against large differences in appearance between the images to register. The deformation is driven by attraction forces between the edges in both images, and a B-spline grid is used to regularize the sparse deformation field. The grid is updated using an original approach that weights the deformation forces for each pixel individually by the edge strengths. This approach makes the algorithm perform well even if not all corresponding edges are present. We report preliminary results from applying the proposed algorithm to a set of (multi-modal) test images. The results show that the proposed method performs well, but is less accurate than state-of-the-art registration methods based on Mutual Information. In addition, the algorithm is used to register test images to manually drawn line images to demonstrate the algorithm's robustness.
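The edge-weighted grid update can be illustrated by pooling per-pixel deformation forces into a coarse control grid, where each pixel's contribution is weighted by its edge strength so that pixels without edge support contribute little. This is a simplified stand-in for the B-spline grid update described above, not the authors' implementation; the function name, cell size, and plain block averaging (instead of B-spline fitting) are assumptions.

```python
import numpy as np

def weighted_grid_update(forces, weights, cell=8):
    """Pool per-pixel deformation forces (H x W x 2) into a coarse grid,
    weighting every pixel by its edge strength (H x W, non-negative).
    Simplified stand-in for a B-spline control-grid update."""
    h, w = forces.shape[:2]
    gh, gw = h // cell, w // cell
    # Reshape into (grid_row, cell_row, grid_col, cell_col, component)
    # blocks, then take a weighted average over each block.
    f = forces[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell, -1)
    wgt = weights[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell, 1)
    num = (f * wgt).sum(axis=(1, 3))
    den = wgt.sum(axis=(1, 3)) + 1e-12  # avoid division by zero
    return num / den  # (gh, gw, 2) weighted-average displacements
```

Pixels on strong edges dominate the update of their grid cell, while cells without edge evidence stay close to zero displacement, which is what makes the scheme tolerant of missing corresponding edges.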