OATAO is an open access repository that collects the work of Toulouse researchers and makes it freely available over the web where possible.

Abstract: This paper introduces a new method for cardiac motion estimation in 2-D ultrasound images. The motion estimation problem is formulated as an energy minimization whose data fidelity term is built on the assumption that the images are corrupted by multiplicative Rayleigh noise. In addition to a classical spatial smoothness constraint, the proposed method exploits the sparse properties of the cardiac motion to regularize the solution via an appropriate dictionary learning step. The proposed method is evaluated on a data set with available ground truth comprising four sequences of highly realistic simulations. The approach is also validated on both healthy and pathological sequences of in vivo data. We evaluate the method in terms of motion estimation accuracy and strain errors and compare its performance with state-of-the-art algorithms. The results show that the proposed method is competitive on the considered data. Furthermore, the in vivo strain analysis demonstrates that meaningful clinical interpretation can be obtained from the estimated motion vectors.
This paper introduces a robust 2D cardiac motion estimation method. The problem is formulated as an energy minimization with an optical-flow-based data fidelity term and two regularization terms imposing spatial smoothness and sparsity of the motion field in an appropriate cardiac motion dictionary. Robustness to outliers, such as imaging artefacts and anatomical motion boundaries, is introduced using robust weighting functions for the data fidelity term as well as for the spatial and sparse regularizations. The motion fields and the weights are computed jointly using an iteratively re-weighted minimization strategy. The proposed robust approach is evaluated on synthetic data and realistic simulation sequences with available ground truth, and its performance is compared with state-of-the-art algorithms. Finally, the proposed method is validated on two sequences of in vivo images. The results demonstrate the benefits of the proposed approach for 2D cardiac ultrasound imaging.
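The iteratively re-weighted strategy described above can be illustrated with a minimal sketch. The example below uses a toy 1-D location estimate and a Cauchy-type weight function (both hypothetical choices for illustration; the paper's actual motion model and weighting functions may differ): residuals under the current estimate yield weights that down-weight outliers, and a weighted least-squares update is repeated until convergence.

```python
import numpy as np

def irls_estimate(y, n_iters=20, c=2.0):
    """Toy iteratively re-weighted estimate of a location parameter.

    Outliers receive low weights via a Cauchy-type function
    w = 1 / (1 + (r / c)^2), mirroring the joint value/weight
    updates of an IRLS scheme (illustrative weight choice only).
    """
    mu = np.median(y)                        # robust initial guess
    for _ in range(n_iters):
        r = y - mu                           # residuals under current estimate
        w = 1.0 / (1.0 + (r / c) ** 2)       # down-weight large residuals
        mu = np.sum(w * y) / np.sum(w)       # weighted least-squares update
    return mu

data = np.array([1.0, 1.2, 0.9, 1.1, 10.0])  # last sample is an outlier
print(irls_estimate(data))                   # stays close to the inlier cluster
```

Unlike the plain mean (about 2.84 here), the re-weighted estimate is barely affected by the outlier, which is the behavior the robust weighting functions aim for.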
This paper investigates a new method for cardiac motion estimation in 2D ultrasound images. The motion estimation problem is formulated as an energy minimization with spatial and sparse regularizations. In addition to a classical spatial smoothness constraint, the proposed method exploits the sparse properties of the cardiac motion to regularize the solution via an appropriate dictionary learning step. The proposed method is evaluated in terms of motion estimation and strain accuracy and compared with state-of-the-art algorithms using a dataset of realistic simulations. These simulation results show that the proposed method provides very promising results for myocardial motion estimation.
This paper introduces a 2D optical flow estimation method for cardiac ultrasound imaging based on a sparse representation. The optical flow problem is regularized using a classical gradient-based smoothness term combined with a sparsity inducing regularization that uses a learned cardiac flow dictionary. A particular emphasis is put on the influence of the spatial and sparse regularizations on the optical flow estimation problem. A comparison with state-of-the-art methods using realistic simulations shows the competitiveness of the proposed method for cardiac motion estimation in ultrasound images.
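The sparsity-inducing regularization above relies on coding motion patches against a learned dictionary. The following sketch shows the generic sparse-coding step with orthogonal matching pursuit on a synthetic random dictionary (an assumption for illustration; the paper uses a dictionary learned from cardiac flow fields, and its exact sparse solver may differ):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: sparse code of x with at most k atoms.

    D: (n, m) dictionary with unit-norm columns; x: (n,) signal.
    Greedily selects the atom most correlated with the residual, then
    refits the coefficients on the current support by least squares.
    """
    support, r = [], x.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))          # most correlated atom
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        r = x - D[:, support] @ coeffs               # update residual
    alpha = np.zeros(D.shape[1])
    alpha[support] = coeffs
    return alpha

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 16))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms (stand-in dictionary)
x = 2.0 * D[:, 3] - 1.5 * D[:, 7]     # synthetic 2-sparse "flow patch"
alpha = omp(D, x, k=2)
print(np.nonzero(alpha)[0])           # recovered support
```

In the estimation framework, such a sparse approximation of each motion patch acts as the regularizer that pulls the flow toward plausible cardiac motion patterns encoded in the dictionary.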
Robust cardiac motion estimation with dictionary learning and temporal regularization for ultrasound imaging.
We address the multi-focus image fusion problem, in which multiple images captured with different focal settings are fused into an all-in-focus image of higher quality. Algorithms for this problem must account for the characteristics of the source images, which contain both focused and blurred features. However, most sparsity-based approaches use a single dictionary in the focused feature space to describe multi-focus images and ignore representations in the blurred feature space. We propose a multi-focus image fusion approach based on sparse representation with a coupled dictionary. It exploits two observations: patches from a given training set can be sparsely represented by a couple of overcomplete dictionaries related to the focused and blurred categories of images, and a sparse approximation based on such a coupled dictionary leads to a more flexible, and therefore better, fusion strategy than one based on simply selecting the sparsest representation in the original image estimate. In addition, to improve fusion performance, we employ a coupled dictionary learning approach that enforces pairwise correlation between the atoms of the dictionaries learned to represent the focused and blurred feature spaces. We also discuss the advantages of the coupled dictionary learning formulation and present efficient algorithms for fusion based on it. Extensive experimental comparisons with state-of-the-art multi-focus image fusion algorithms validate the effectiveness of the proposed approach.

Index Terms: Sparse representations, coupled dictionary learning, image fusion, multi-focus image.

The authors are with Aalto University, Dept. of Signal Processing and Acoustics, FI-00076 Aalto, Finland. E-mails: farshad.ghorbaniveshki@aalto.fi and svor@ieee.org
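The coupled-dictionary idea can be sketched as a per-patch decision rule: a patch is coded against a "focused" and a "blurred" dictionary with paired atoms, and the dictionary yielding the smaller residual indicates which source image is in focus there. The dictionaries below are synthetic stand-ins built with an assumed blur relation between paired atoms; the paper instead learns both dictionaries with enforced pairwise correlation.

```python
import numpy as np

# Synthetic coupled dictionaries: each blurred atom is a (hypothetical)
# attenuated, perturbed copy of the corresponding focused atom.
rng = np.random.default_rng(1)
n, m = 16, 8
D_focused = rng.standard_normal((n, m))
D_blurred = 0.6 * D_focused + 0.1 * rng.standard_normal((n, m))

def residual(D, x):
    """Least-squares reconstruction residual of x against dictionary D."""
    coeffs, *_ = np.linalg.lstsq(D, x, rcond=None)
    return np.linalg.norm(x - D @ coeffs)

# A patch synthesized from the focused dictionary should be better
# explained by it than by the blurred one.
x = D_focused @ rng.standard_normal(m)
is_focused = residual(D_focused, x) <= residual(D_blurred, x)
print(is_focused)
```

A fusion rule can then copy each patch from the source classified as focused, with the paired coefficients offering the extra flexibility the abstract refers to.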
Registration of multi-modal medical images is an essential pre-processing step, for example for fusion or image-guided interventions. However, the alignment process is complicated by high variability in tissue appearance between modalities, in addition to local intensity variations and artefacts. This work introduces a robust multi-modal registration approach that mitigates the undesirable effects of such variability. Robustness is achieved using Huber's loss function for the data fidelity and regularization terms. We propose a novel approach based on Huber's criterion that enables jointly convex estimation of the motions and the associated scale parameters. We formulate the problem as a complex 2D transformation estimation and investigate a robust total-variation smoothing as well as a dictionary-learning-based data fidelity term. Experiments are conducted on two datasets of multi-contrast MR brain images.
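Huber's loss underpinning the robustness above is quadratic for small residuals and linear for large ones, so outliers have bounded influence. A minimal sketch of the loss and the IRLS weight it induces (the threshold `delta` is a fixed assumption here; the paper estimates the scale parameters jointly with the motion):

```python
import numpy as np

def huber_loss(r, delta=1.0):
    """Huber's loss: 0.5*r^2 for |r| <= delta, linear growth beyond."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def huber_weight(r, delta=1.0):
    """IRLS weight derived from Huber's loss: weight 1 in the quadratic
    regime, delta/|r| in the linear regime, capping outlier influence."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

r = np.array([0.2, -0.5, 3.0])
print(huber_loss(r))     # grows only linearly for the large residual
print(huber_weight(r))   # weights: 1, 1, and delta/|r| for the outlier
```

Inserting these weights into a re-weighted least-squares scheme gives a convex surrogate at each iteration, which is what makes the joint motion/scale estimation tractable.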