In this work, a new two-point method for water-fat imaging is described and explored. It generalizes existing two-point methods by eliminating some of the restrictions that these methods impose on the choice of echo times. The new two-point method thus promises more freedom in the selection of protocol parameters and higher scan efficiency. Its performance was studied theoretically and evaluated experimentally in abdominal imaging with a multigradient-echo sequence. While its performance depends on the choice of echo times, it generally compares favorably with existing two-point methods. Notably, water images with higher spatial resolution and better signal-to-noise ratio were attained with it in single breathholds at 3.0 T and 1.5 T, respectively. The use of more accurate spectral models of fat is shown to substantially reduce observed variations in the extent of fat suppression. The acquisition of in- and opposed-phase images is demonstrated to be replaceable by a synthesis from water and fat images. Finally, the new two-point method is also applied to autocalibrate a multidimensional eddy current correction and thereby to enhance the fat suppression achieved with three-point methods, especially toward the edges of larger fields of view. Magn Reson Med 65:96-107, 2011. © 2010 Wiley-Liss, Inc.

Key words: water-fat separation; fat suppression; Dixon methods; multiecho acquisitions; abdominal imaging; eddy currents

As hyperintense signal from fat may obscure underlying pathology, its partial or complete suppression is a basic requirement in various applications of magnetic resonance imaging. The characteristic behavior of the fat signal results from the comparatively short relaxation times and large chemical shifts of the dominant methylene protons and serves as the basis for its elimination.

Fat suppression is often an integral part of the acquisition. Popular methods include short-tau inversion recovery, which exploits the specific relaxation times, and selective saturation, which relies on the specific chemical shifts (1,2). However, these methods all have individual drawbacks, such as longer scan times, lower signal-to-noise ratio (SNR), higher specific absorption rate, or less tolerance to field inhomogeneities. Postponing the separation of water and fat signals until the reconstruction allows most of these disadvantages to be avoided. For this purpose, so-called Dixon methods perform measurements at different echo times to encode the chemical shift (3). Besides fat suppression, they also permit efficient water-fat imaging, providing additional diagnostic information of relevance to selected applications.

Several Dixon methods have been proposed over the last two decades (4). Apart from different strategies for the separation, they are mainly characterized by the number of echoes, or points, that they sample, and by the constraints that they impose on the echo times. We focus in this work on two- and three-point methods, as multipoint methods are usually very similar to three-point methods, and one-point methods are gene...
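For orientation, the sketch below shows the classic symmetric special case that the abstract's generalized method relaxes: with one echo acquired exactly in phase and one exactly opposed phase, water and fat follow from a sum and a difference, and in-/opposed-phase images can conversely be synthesized from separated water and fat images, as the abstract notes. This is a minimal illustration, not the paper's generalized algorithm; it assumes magnitude-consistent inputs from which field-inhomogeneity phase has already been removed.

```python
import numpy as np

def two_point_dixon(s_ip, s_op):
    """Classic two-point Dixon separation for echoes acquired exactly
    in phase (s_ip = W + F) and opposed phase (s_op = W - F).
    Assumes the phase error due to field inhomogeneity has already
    been removed; the paper's method relaxes the echo-time restriction.
    """
    water = 0.5 * (s_ip + s_op)
    fat = 0.5 * (s_ip - s_op)
    return water, fat

def synthesize_ip_op(water, fat):
    """Synthesize in- and opposed-phase images from separated water
    and fat images, replacing their explicit acquisition."""
    return water + fat, water - fat
```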
Seismic signals are often irregularly sampled along spatial coordinates, leading to suboptimal processing and imaging results. Least-squares estimation of Fourier components is used for the reconstruction of band-limited seismic signals that are irregularly sampled along one spatial coordinate. A simple and efficient diagonal weighting scheme, based on the distance between the samples, takes the properties of the noise (signal outside the bandwidth) into account in an approximate sense. Diagonal stabilization based on the energies of the signal and the noise ensures robust estimation. Reconstruction for each temporal frequency component allows the specification of a varying spatial bandwidth dependent on the minimum apparent velocity. This parameterization improves the reconstruction capability for the lower temporal frequencies. In practical circumstances, the maximum size of the gaps in which the signal can be reconstructed is three times the (temporal-frequency-dependent) Nyquist interval. Reconstruction in the wavenumber domain allows a very efficient implementation of the algorithm, requiring a total number of operations only a few times that of a 2-D fast Fourier transform of the output data set. Quality-control indicators of the reconstruction procedure can be computed; these may also serve as decision criteria for in-fill shooting during acquisition. The method can be applied to any subset of seismic data with one varying spatial coordinate. Applied along the cross-line direction, it can be used to compute a 3-D stack with improved anti-alias protection and less distortion of the signal within the bandwidth.
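The following is a minimal sketch of such a weighted, diagonally stabilized least-squares estimate of Fourier components for one temporal frequency slice. The variable names, the normalization, and the plain normal-equations solve are assumptions for illustration, not the paper's notation or its efficient wavenumber-domain implementation.

```python
import numpy as np

def ls_fourier_components(x, d, k, weights, stab):
    """Least-squares estimate of Fourier components from irregularly
    sampled data at one temporal frequency (illustrative sketch).

    x       : (N,) irregular spatial sample positions
    d       : (N,) complex data values at those positions
    k       : (M,) wavenumbers inside the assumed signal bandwidth
    weights : (N,) diagonal weights, e.g. proportional to the
              distance between neighboring samples
    stab    : scalar diagonal stabilization, e.g. a noise-to-signal
              energy ratio
    """
    # Nonuniform Fourier matrix: A[n, m] = exp(i * k[m] * x[n])
    A = np.exp(1j * np.outer(x, k))
    W = np.diag(weights)
    # Stabilized normal equations: (A^H W A + stab * I) f = A^H W d
    lhs = A.conj().T @ W @ A + stab * np.eye(len(k))
    rhs = A.conj().T @ W @ d
    return np.linalg.solve(lhs, rhs)
```

The temporal-frequency-dependent spatial bandwidth mentioned in the abstract would enter through the choice of k: at temporal frequency f, only wavenumbers up to roughly 2*pi*f/v_min need to be estimated for a minimum apparent velocity v_min.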
DUIJNDAM, A.J.W. 1988. Bayesian estimation in seismic inversion. Part I: Principles. Geophysical Prospecting 36, 878-898. This paper gives a review of Bayesian parameter estimation. The Bayesian approach is fundamental and applicable to all kinds of inverse problems. Its basic formulation is probabilistic. Information from data is combined with a priori information on model parameters. The result is called the a posteriori probability density function and it is the solution to the inverse problem. In practice an estimate of the parameters is obtained by taking its maximum. Well-known estimation procedures like least-squares inversion or l1-norm inversion result, depending on the type of noise and a priori information given. Due to the a priori information the maximum will be unique and the estimation procedures will be stable except (in theory) for the most pathological problems, which are very unlikely to occur in practice. The approach of Tarantola and Valette can be derived within classical probability theory. The Bayesian approach allows a full resolution and uncertainty analysis, which is discussed in Part II of the paper.
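As a concrete instance of the framework the abstract describes, the sketch below shows the maximum a posteriori (MAP) estimate for the standard linear-Gaussian special case, in which maximizing the posterior reduces to damped least squares. The notation (G, C_d, C_m) is assumed for illustration, not taken from the paper.

```python
import numpy as np

def map_estimate_gaussian(G, d, C_d, m_prior, C_m):
    """MAP estimate for a linear forward model d = G m + noise with
    Gaussian noise covariance C_d and Gaussian prior N(m_prior, C_m).

    The posterior is proportional to
      exp(-1/2 [(d - G m)^T C_d^{-1} (d - G m)
                + (m - m_prior)^T C_m^{-1} (m - m_prior)]),
    and setting its gradient to zero gives the normal equations below.
    """
    Cd_inv = np.linalg.inv(C_d)
    Cm_inv = np.linalg.inv(C_m)
    lhs = G.T @ Cd_inv @ G + Cm_inv
    rhs = G.T @ Cd_inv @ d + Cm_inv @ m_prior
    return np.linalg.solve(lhs, rhs)
```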
The nonuniform discrete Fourier transform (NDFT) can be computed with a fast algorithm, referred to as the nonuniform fast Fourier transform (NFFT). In L dimensions, the NFFT requires O(M_1···M_L log(M_1···M_L) + N log^L(1/ε)) operations, where M_ℓ is the number of Fourier components along dimension ℓ, N is the number of irregularly spaced samples, and ε is the required accuracy. This is a dramatic improvement over the O(N M_1···M_L) operations required for the direct evaluation (NDFT). The performance of the NFFT depends on the lowpass filter used in the algorithm. A truncated Gauss pulse, proposed in the literature, is optimized. A newly proposed filter, a Gauss pulse tapered with a Hanning window, performs better than the truncated Gauss pulse and the B-spline, also proposed in the literature. For small filter length, a numerically optimized filter shows the best results. Numerical experiments for 1-D and 2-D implementations confirm the theoretically predicted accuracy and efficiency properties of the algorithm.
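To make the gridding idea concrete, here is a minimal 1-D sketch of the NFFT with the plain truncated Gauss pulse as the lowpass filter: spread the irregular samples onto an oversampled grid, FFT, then divide out the filter's analytic Fourier transform. The tapered and numerically optimized filters the abstract proposes would replace the Gaussian window below; the oversampling factor, spreading half-width, and Gaussian width parameter follow common choices from the NFFT literature and are assumptions, not the paper's values.

```python
import numpy as np

def nfft_adjoint_1d(x, d, M, sigma=2, m_sp=12):
    """Approximate F[m] = sum_n d[n] * exp(-2j*pi*m*x[n]) for
    m = -M/2 .. M/2-1 by Gaussian gridding plus an FFT (M even).

    x : (N,) sample positions in [0, 1)
    d : (N,) complex sample values
    sigma : oversampling factor; m_sp : spreading half-width in
    fine-grid cells (larger m_sp -> smaller accuracy parameter eps).
    """
    G = sigma * M                       # oversampled grid size
    # Gaussian width balancing truncation and aliasing errors
    tau = m_sp / (4 * np.pi * M**2 * sigma * (sigma - 0.5))
    u = np.zeros(G, dtype=complex)
    for xn, dn in zip(x, d):
        j0 = int(np.round(xn * G))
        j = np.arange(j0 - m_sp, j0 + m_sp + 1)
        # truncated Gaussian window, periodized onto the fine grid
        u[j % G] += dn * np.exp(-((j / G - xn) ** 2) / (4 * tau))
    # FFT of the gridded data; keep the M central frequencies
    U = np.fft.fftshift(np.fft.fft(u))
    m = np.arange(-M // 2, M // 2)
    Um = U[G // 2 + m]
    # deconvolve the window using its analytic Fourier transform
    phi_hat = 2 * np.sqrt(np.pi * tau) * np.exp(-4 * np.pi**2 * m**2 * tau)
    return Um / (G * phi_hat)
```

For testing, the result can be compared against the direct NDFT, np.exp(-2j * np.pi * np.outer(np.arange(-M // 2, M // 2), x)) @ d, which costs O(N M) instead.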
A parameter estimation or inversion procedure is incomplete without an analysis of uncertainties in the results. In the fundamental approach of Bayesian parameter estimation, discussed in Part I of this paper, the a posteriori probability density function (pdf) is the solution to the inverse problem. It is the product of the a priori pdf, containing a priori information on the parameters, and the likelihood function, which represents the information from the data. The maximum of the a posteriori pdf is usually taken as a point estimate of the parameters. The shape of this pdf, however, gives the full picture of uncertainty in the parameters. Uncertainty analysis is strictly a problem of information reduction. This can be achieved in several stages. Standard deviations can be computed as overall uncertainty measures of the parameters, when the shape of the a posteriori pdf is not too far from Gaussian. Covariance and related matrices give more detailed information. An eigenvalue or principal component analysis allows the inspection of essential linear combinations of the parameters. The relative contributions of a priori information and data to the solution can be elegantly studied. Results in this paper are worked out in detail for the non-linear Gaussian case. Comparisons with other approaches are given. The procedures are illustrated with a simple two-parameter inverse problem.
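The sketch below illustrates these stages of information reduction for the (linearized) Gaussian case, continuing the assumed notation of the MAP sketch above: posterior covariance, per-parameter standard deviations, an eigen-analysis of parameter combinations, and a resolution-like matrix separating the contributions of data and prior. It is an illustration of the standard Gaussian formulas, not the paper's derivation.

```python
import numpy as np

def posterior_uncertainty(G, C_d, C_m):
    """Uncertainty measures for a linear(ized) Gaussian inverse
    problem d = G m + noise with prior covariance C_m."""
    Cd_inv = np.linalg.inv(C_d)
    Cm_inv = np.linalg.inv(C_m)
    # Posterior covariance: inverse Hessian of the negative log-posterior
    C_post = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
    # Overall uncertainty measures per parameter
    std = np.sqrt(np.diag(C_post))
    # Eigen-analysis: small eigenvalues correspond to well-determined
    # linear combinations of parameters, large ones to poorly determined
    eigval, eigvec = np.linalg.eigh(C_post)
    # Relative contribution of the data to the solution; I - R gives
    # the contribution of the a priori information
    R = C_post @ G.T @ Cd_inv @ G
    return C_post, std, eigval, eigvec, R
```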