Thermal ablation is a minimally invasive procedure for treating small or unresectable tumors. Although CT is widely used for guiding ablation procedures, the contrast of tumors against the surrounding normal tissues in CT images is often poor, making accurate thermal ablation difficult. In this paper, we propose a fast MR-CT image registration method that overlays a pre-procedural MR (pMR) image onto an intra-procedural CT (iCT) image for guiding the thermal ablation of liver tumors. By first using a Cycle-GAN model with a mutual information constraint to generate a synthesized CT (sCT) image from the corresponding pMR image, pre-procedural MR-CT image registration is reduced to traditional mono-modality CT-CT image registration. At the intra-procedural stage, a partial-convolution-based network is first used to inpaint the probe and its artifacts in the iCT image. Then, an unsupervised registration network efficiently aligns the pre-procedural CT (pCT) image with the inpainted iCT (inpCT) image. The final transformation from pMR to iCT is obtained by composing the two estimated transformations, i.e., (1) from the pMR image space to the pCT image space (through sCT) and (2) from the pCT image space to the iCT image space (through inpCT). Experimental results confirm that the proposed method achieves high registration accuracy at a very fast computational speed.
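The final transform in the pipeline above is the composition of the two estimated transformations. As a minimal illustrative sketch (not the paper's implementation, which uses deformable registration), the composition can be shown with 4x4 homogeneous affine matrices; the matrix values below are toy assumptions.

```python
import numpy as np

def compose_affine(t_pmr_to_pct: np.ndarray, t_pct_to_ict: np.ndarray) -> np.ndarray:
    """Combine the two estimated transforms into a single pMR -> iCT transform.

    For affine matrices, applying t_pmr_to_pct first and t_pct_to_ict second
    is the matrix product t_pct_to_ict @ t_pmr_to_pct.
    """
    return t_pct_to_ict @ t_pmr_to_pct

# Toy transforms (assumptions for illustration only):
t1 = np.eye(4); t1[:3, 3] = [5.0, 0.0, 0.0]  # pMR -> pCT: translate 5 mm along x
t2 = np.eye(4); t2[0, 0] = 2.0               # pCT -> iCT: scale x by 2

t = compose_affine(t1, t2)

# A homogeneous point in pMR space, mapped directly into iCT space:
p = np.array([1.0, 0.0, 0.0, 1.0])
print(t @ p)  # translate then scale: x = (1 + 5) * 2 = 12
```

In practice the two transforms are dense deformation fields, and composition is done by warping one field through the other, but the ordering logic is the same.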
We introduce a new multi-atlas segmentation (MAS) framework for MR brain images with tumors. The basic idea of MAS is to register and fuse label information from multiple normal brain atlases to a new brain image for segmentation. Many successful MAS methods have been proposed; however, most are developed for normal brain images, and tumor brain images usually pose a great challenge for them, because tumors hinder the registration of normal brain atlases to the tumor brain image. To address this challenge, in the first step of our MAS framework, a new low-rank method recovers a normal-looking brain image from the MR tumor brain image based on information from the normal brain atlases. Unlike conventional low-rank methods, which produce recovered images with distorted normal brain regions, our low-rank method harnesses a spatial constraint to preserve the normal brain regions in the recovered image. In the second step, normal brain atlases are registered to the recovered image without influence from the tumor. These two steps are iterated until convergence to obtain the final segmentation of the tumor brain image. During the iterations, both the recovered image and the registrations of the normal brain atlases to it are gradually refined. We have compared our proposed method with a state-of-the-art method using both synthetic and real MR tumor brain images. Experimental results show that our proposed method effectively recovers normal-looking images and also improves segmentation accuracy.
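To give a feel for the low-rank recovery step, here is a minimal stand-in sketch: stacking vectorized atlas images with the subject image and taking a truncated-SVD low-rank approximation pulls the subject toward the atlas population, suppressing the tumor outlier. This is only an illustration of the low-rank idea; the paper's method additionally uses a spatial constraint and alternates with registration, neither of which is shown here, and all names and data below are assumptions.

```python
import numpy as np

def low_rank_recover(stack: np.ndarray, rank: int) -> np.ndarray:
    """Truncated-SVD low-rank approximation of an image stack.

    Each column of `stack` is a vectorized image; the last column is the
    subject. Returns the recovered (low-rank) subject image.
    """
    u, s, vt = np.linalg.svd(stack, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return approx[:, -1]

# Toy data: three identical "atlases" plus a subject with a localized
# high-intensity "tumor" voxel (all values are illustrative assumptions).
atlas = np.ones(6)
subject = atlas.copy()
subject[2] += 10.0  # simulated tumor intensity
stack = np.stack([atlas, atlas, atlas, subject], axis=1)

recovered = low_rank_recover(stack, rank=1)
print(recovered)  # tumor voxel is pulled back toward the atlas intensity
```

In an iterative scheme, the recovered image would then be used as the registration target for the atlases, and the refined registrations would in turn improve the next recovery.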
Automatic segmentation of medical images finds abundant applications in clinical studies. Computed Tomography (CT) imaging plays a critical role in the diagnosis and surgical planning of craniomaxillofacial (CMF) surgeries, as it clearly shows bony structures. However, CT imaging poses radiation risks to the subjects being scanned. Alternatively, Magnetic Resonance Imaging (MRI) is considered safe and provides good visualization of soft tissues, but bony structures are hardly visible in MRI, which makes segmenting them from MRI quite challenging. In this paper, we propose a cascaded generative adversarial network with a deep-supervision discriminator (Deep-supGAN) for automatic segmentation of bony structures. The first block in this architecture generates a high-quality CT image from an MRI, and the second block segments the bony structures from the MRI and the generated CT image. Different from traditional discriminators, the deep-supervision discriminator distinguishes the generated CT from the ground truth at different levels of feature maps. For segmentation, the loss is computed not only at the voxel level but also at higher, more abstract perceptual levels. Experimental results show that the proposed method generates CT images with clearer structural details and segments bony structures more accurately than state-of-the-art methods.
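The idea of combining a voxel-level term with losses on feature maps at several levels can be sketched as follows. This is a generic multi-level loss in the spirit of the description above, not the paper's exact formulation; the weights, the L1 choice, and the stand-in "feature extractors" are all assumptions.

```python
import numpy as np

def multi_level_loss(pred, target, feature_extractors, w_voxel=1.0, w_feat=0.1):
    """Voxel-level L1 loss plus L1 terms on multiple levels of feature maps.

    `feature_extractors` is a list of functions mapping an image to a
    feature map; in a real network these would be intermediate layers.
    """
    loss = w_voxel * np.mean(np.abs(pred - target))  # voxel-level term
    for f in feature_extractors:                     # perceptual-level terms
        loss += w_feat * np.mean(np.abs(f(pred) - f(target)))
    return loss

# Toy "feature extractor": 2x average pooling as a hypothetical stand-in
# for a deeper feature map.
def pool2(x):
    return x.reshape(-1, 2).mean(axis=1)

pred = np.array([1.0, 2.0, 3.0, 4.0])
target = np.array([1.0, 2.0, 5.0, 4.0])
print(multi_level_loss(pred, target, [pool2]))  # 0.5 voxel + 0.1 * 0.5 feature = 0.55
```

Supervising the discriminator (or loss) at several feature levels encourages agreement not just in raw intensities but also in coarser structural patterns.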
In this paper, we propose an efficient framework for the parcellation of white matter tractograms using discriminative dictionary learning. Key to our framework is the learning of a compact dictionary for each fiber bundle so that the streamlines within the bundle can be sufficiently represented. Dictionaries for multiple bundles are combined for whole-brain tractogram representation, and are learned jointly to encourage inter-bundle incoherence for discriminative power. The proposed method allows streamlines to be assigned to more than one bundle, catering to scenarios where bundles cannot be clearly separated. Experiments on a bundle-labeled HCP dataset and an infant dataset highlight the ability of our framework to group streamlines into anatomically plausible bundles.
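The assignment mechanics can be illustrated with a minimal sketch: score a streamline feature vector against each bundle's dictionary by reconstruction residual, keeping it in every bundle whose residual is low enough. Here plain least squares stands in for the paper's sparse coding, and the dictionaries and thresholding are toy assumptions.

```python
import numpy as np

def bundle_residuals(x, dictionaries):
    """Reconstruction residual of streamline feature `x` under each dictionary.

    Each dictionary has shape (feature_dim, n_atoms); a small residual means
    the bundle's atoms represent the streamline well.
    """
    residuals = []
    for d in dictionaries:
        coef, *_ = np.linalg.lstsq(d, x, rcond=None)   # least-squares code
        residuals.append(np.linalg.norm(d @ coef - x))
    return np.array(residuals)

# Toy dictionaries: bundle 0 spans the x-axis, bundle 1 spans the y-axis
# (hypothetical 2-D features for illustration).
d0 = np.array([[1.0], [0.0]])
d1 = np.array([[0.0], [1.0]])

x = np.array([3.0, 0.0])  # lies entirely in bundle 0's subspace
r = bundle_residuals(x, [d0, d1])
print(np.argmin(r))  # -> 0, the best-fitting bundle
```

A soft assignment follows naturally: a streamline near the boundary between two bundles would have comparable residuals under both dictionaries and could be retained in both, matching the multi-bundle assignment described above.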