In this paper we present a benchmarking framework for the validation of cardiac motion analysis algorithms. The reported methods were developed in response to an open challenge put to the medical imaging community through a MICCAI workshop. The database included magnetic resonance (MR) and 3D ultrasound (3DUS) datasets from a dynamic phantom and 15 healthy volunteers. Participants processed 3D tagged MR datasets (3DTAG), cine steady-state free precession MR datasets (SSFP) and 3DUS datasets, amounting to 1158 image volumes. Ground-truth for motion tracking was based on 12 landmarks (4 walls at 3 ventricular levels). These landmarks were manually tracked by two observers in the 3DTAG data over the whole cardiac cycle, using an in-house application with 4D visualization capabilities. The median inter-observer variability was 0.77mm for the phantom dataset and 0.84mm for the volunteer datasets. The ground-truth was registered to 3DUS coordinates using a point-based similarity transform. Four institutions responded to the challenge by providing motion estimates for the data: Fraunhofer MEVIS (MEVIS), Bremen, Germany; Imperial College London - University College London (IUCL), UK; Universitat Pompeu Fabra (UPF), Barcelona, Spain; Inria-Asclepios project (INRIA), France. Details on the implementation and evaluation of the four methodologies are presented in this manuscript. The manually tracked landmarks were used to evaluate the tracking accuracy of all methodologies. For 3DTAG, median errors were computed over all time frames for the phantom dataset (MEVIS=1.20mm, IUCL=0.73mm, UPF=1.10mm, INRIA=1.09mm) and for the volunteer datasets (MEVIS=1.33mm, IUCL=1.52mm, UPF=1.09mm, INRIA=1.32mm). For 3DUS, median errors were computed at end diastole and end systole for the phantom dataset (MEVIS=4.40mm, UPF=3.48mm, INRIA=4.78mm) and for the volunteer datasets (MEVIS=3.51mm, UPF=3.71mm, INRIA=4.07mm).
For SSFP, median errors were computed at end diastole and end systole for the phantom dataset (UPF=6.18mm, INRIA=3.93mm) and for the volunteer datasets (UPF=3.09mm, INRIA=4.78mm). Finally, strain curves were generated and qualitatively compared. Good agreement was found between the different modalities and methodologies, except for radial strain, which showed high variability in cases of lower image quality.
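The point-based similarity transform used above to map the ground-truth landmarks into 3DUS coordinates has a closed-form least-squares solution. A minimal NumPy sketch in the Umeyama style (the function name is illustrative, not taken from the challenge code):

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src landmarks onto dst; src, dst are (N, 3) corresponding points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                 # cross-covariance of the point sets
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt                             # optimal rotation
    scale = np.trace(np.diag(S) @ D) / xs.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s                # optimal translation
    return scale, R, t
```

With only 12 landmarks per subject, this fit is fast and deterministic; a point maps as `scale * R @ p + t`.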
We propose to learn a low-dimensional probabilistic deformation model from data which can be used for registration and the analysis of deformations. The latent variable model maps similar deformations close to each other in an encoding space. It enables comparing deformations, generating normal or pathological deformations for any new image, and transporting deformations from one image pair to any other image. Our unsupervised method is based on variational inference. In particular, we use a conditional variational autoencoder (CVAE) network and constrain transformations to be symmetric and diffeomorphic by applying a differentiable exponentiation layer with a symmetric loss function. We also present a formulation that includes spatial regularization such as diffusion-based filters. Additionally, our framework provides multi-scale velocity field estimations. We evaluated our method on 3-D intra-subject registration using 334 cardiac cine-MRIs. On this dataset, our method showed state-of-the-art performance, with a mean DICE score of 81.2% and a mean Hausdorff distance of 7.3mm using 32 latent dimensions, compared to three state-of-the-art methods, while also producing more regular deformation fields. The average time per registration was 0.32s. In addition, we visualized the learned latent space and showed that the encoded deformations can be used to transport deformations and to cluster diseases, with a classification accuracy of 83% after applying a linear projection.
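An exponentiation layer that maps a stationary velocity field to a diffeomorphic displacement is commonly implemented by scaling and squaring. A rough 2-D NumPy sketch of that idea (a hand-rolled bilinear warp, not the authors' differentiable network layer; `warp` and `exp_svf` are illustrative names):

```python
import numpy as np

def warp(field, disp):
    """Bilinearly sample a 2-D vector field `field` (H, W, 2) at the
    positions identity + disp, clamping samples to the image domain."""
    H, W = field.shape[:2]
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    py = np.clip(ys + disp[..., 0], 0, H - 1)
    px = np.clip(xs + disp[..., 1], 0, W - 1)
    y0 = np.clip(np.floor(py).astype(int), 0, H - 2)
    x0 = np.clip(np.floor(px).astype(int), 0, W - 2)
    wy, wx = py - y0, px - x0
    return ((1 - wy)[..., None] * (1 - wx)[..., None] * field[y0, x0]
            + (1 - wy)[..., None] * wx[..., None] * field[y0, x0 + 1]
            + wy[..., None] * (1 - wx)[..., None] * field[y0 + 1, x0]
            + wy[..., None] * wx[..., None] * field[y0 + 1, x0 + 1])

def exp_svf(v, steps=6):
    """Scaling and squaring: start from v / 2**steps and compose the
    displacement with itself `steps` times to approximate exp(v)."""
    disp = v / (2 ** steps)
    for _ in range(steps):
        disp = disp + warp(disp, disp)   # phi_{2t} = phi_t o phi_t
    return disp
```

Because each step is built from interpolation and addition, the whole map is differentiable, which is what allows it to sit inside a network as a layer.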
Tracking soft tissues in medical images using non-linear image registration algorithms requires methods that are fast and provide spatial transformations consistent with the biological characteristics of the tissues. The logDemons algorithm is a fast non-linear registration method that computes diffeomorphic transformations parameterised by stationary velocity fields. Although computationally efficient, its use for tissue tracking has been limited because of its ad-hoc Gaussian regularisation, which hampers the implementation of more biologically motivated regularisations. In this work, we improve the logDemons by integrating elasticity and incompressibility for soft-tissue tracking. To that end, a mathematical justification of the demons Gaussian regularisation is proposed. Building on this result, we replace the Gaussian smoothing by an efficient elastic-like regulariser based on isotropic differential quadratic forms of vector fields. The registration energy functional is finally minimised under the divergence-free constraint to obtain incompressible deformations. As the elastic regulariser and the constraint are linear, the method remains computationally tractable and easy to implement. Tests on synthetic incompressible deformations showed that our approach outperforms the original logDemons in terms of elastic incompressible deformation recovery without reducing the image matching accuracy. As an application, we applied the proposed algorithm to estimate 3D myocardium strain on clinical cine MRI of two adult patients. Results showed that the incompressibility constraint improves the cardiac motion recovery when compared to the ground truth provided by 3D tagged MRI.
Automatic parsing of anatomical objects in X-ray images is critical to many clinical applications, in particular towards image-guided intervention and workflow automation. Existing deep network models require a large amount of labeled data. However, obtaining accurate pixelwise labeling in X-ray images relies heavily on skilled clinicians, due to the large overlaps of anatomy and the complex texture patterns. On the other hand, organs in 3D CT scans preserve clearer structures as well as sharper boundaries and thus can be easily delineated. In this paper, we propose a novel model framework for learning automatic X-ray image parsing from labeled CT scans. Specifically, a Dense Image-to-Image network (DI2I) for multi-organ segmentation is first trained on X-ray-like Digitally Reconstructed Radiographs (DRRs) rendered from 3D CT volumes. Then we introduce a Task Driven Generative Adversarial Network (TD-GAN) architecture to achieve simultaneous style transfer and parsing for unseen real X-ray images. TD-GAN consists of a modified cycle-GAN substructure for pixel-to-pixel translation between DRRs and X-ray images and an added module leveraging the pre-trained DI2I to enforce segmentation consistency. The TD-GAN framework is general and can be easily adapted to other learning tasks. In the numerical experiments, we validate the proposed model on 815 DRRs and 153 topograms. While the vanilla DI2I without any adaptation fails completely on segmenting the topograms, the proposed model does not require any topogram labels and is able to provide a promising average Dice of 85%, which achieves the same level of accuracy as supervised training (88%). Disclaimer: This feature is based on research, and is not commercially available. Due to regulatory reasons its future availability cannot be guaranteed.
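The 85% and 88% figures quoted above are Dice overlap scores. For reference, the metric on a pair of binary masks is simply (a hypothetical helper, not taken from the paper's code):

```python
import numpy as np

def dice(a, b):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks;
    returns 1.0 when both masks are empty."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

For multi-organ segmentation, the score is typically computed per organ label and then averaged.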
Robust image registration in medical imaging is essential for the comparison or fusion of images acquired from various perspectives, modalities or at different times. Typically, an objective function needs to be minimized, assuming specific a priori deformation models and predefined or learned similarity measures. However, these approaches have difficulty coping with large deformations or a large variability in appearance. Using modern deep learning (DL) methods with automated feature design, these limitations could be resolved by learning the intrinsic mapping solely from experience. In this paper, we investigate how DL could help organ-specific (ROI-specific) deformable registration, to solve motion compensation or atlas-based segmentation problems, for instance in prostate diagnosis. An artificial agent is trained to solve the task of non-rigid registration by exploring the parametric space of a statistical deformation model built from training data. Since it is difficult to extract trustworthy ground-truth deformation fields, we present a training scheme with a large number of synthetically deformed image pairs requiring only a small number of real inter-subject pairs. Our approach was tested on inter-subject registration of prostate MR data and reached a median DICE score of 0.88 in 2-D and 0.76 in 3-D, showing improved results compared to state-of-the-art registration algorithms.
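A statistical deformation model of the kind the agent explores can be built by PCA over flattened training deformation fields; synthetically deformed pairs are then generated by sampling the mode coefficients. A NumPy sketch under that assumption (function names are illustrative; the paper's model details may differ):

```python
import numpy as np

def build_sdm(fields, n_modes=3):
    """PCA statistical deformation model: each row of `fields` is one
    flattened training deformation field. Returns mean, modes and the
    per-mode standard deviations."""
    mean = fields.mean(axis=0)
    U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
    std = S[:n_modes] / np.sqrt(len(fields) - 1)   # mode std deviations
    return mean, Vt[:n_modes], std

def sample_deformation(mean, modes, std, coeffs):
    """Synthesize a deformation from mode coefficients given in std units."""
    return mean + (np.asarray(coeffs) * std) @ modes
```

Exploring this space means the agent acts on a handful of coefficients instead of a dense displacement field, which keeps the action space small.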
Cardiac remodelling plays a crucial role in heart diseases. Analyzing how the heart grows and remodels over time can provide valuable insights into pathological mechanisms, eventually resulting in quantitative metrics for disease evaluation and therapy planning. This study aims to quantify the regional impact of valve regurgitation and heart growth on the end-diastolic right ventricle (RV) in patients with tetralogy of Fallot, a severe congenital heart defect. The ultimate goal is to determine, among clinical variables, predictors for the RV shape, from which a statistical model that predicts RV remodelling is built. Our approach relies on a forward model based on currents and a diffeomorphic surface registration algorithm to estimate an unbiased template. Local effects of RV regurgitation upon the RV shape were assessed with Principal Component Analysis (PCA) and a cross-sectional multivariate design. A generative 3-D model of RV growth was then estimated using partial least squares (PLS) and canonical correlation analysis (CCA). Applied to a retrospective population of 49 patients, cross-effects between growth and pathology could be identified. Qualitatively, the statistical findings were judged realistic by cardiologists. 10-fold cross-validation demonstrated promising generalization and stability of the growth model. Compared to PCA regression, PLS was more compact and more precise, and provided better predictions.
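The PLS step can be sketched for a single latent component: the predictor direction is the leading left singular vector of the cross-covariance between the clinical variables and the shape coordinates, and the shape loadings follow by least squares on the latent scores. A NumPy illustration assuming column-centered data (one component only; the study fits a fuller PLS/CCA model):

```python
import numpy as np

def pls1_fit(X, Y):
    """Single-component PLS regression sketch. X: clinical predictors,
    Y: shape coordinates, both assumed column-centered.
    Returns a coefficient matrix B with Y ≈ X @ B."""
    U, _, _ = np.linalg.svd(X.T @ Y, full_matrices=False)
    w = U[:, 0]              # direction maximizing cross-covariance
    t = X @ w                # latent scores of the subjects
    q = Y.T @ t / (t @ t)    # shape loadings regressed on the scores
    return np.outer(w, q)    # rank-one predictor of shape from clinics
```

Unlike PCA regression, which picks directions of maximal predictor variance regardless of the response, PLS directions are chosen for their covariance with the shape, which is why the resulting model can be both more compact and more predictive.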