We generalize the primal-dual hybrid gradient (PDHG) algorithm proposed by Zhu and Chan in [An Efficient Primal-Dual Hybrid Gradient Algorithm for Total Variation Image Restoration, CAM Report 08-34, UCLA, Los Angeles, CA, 2008] to a broader class of convex optimization problems. In addition, we survey several closely related methods and explain the connections to PDHG. We point out convergence results for a modified version of PDHG that has a similarly good empirical convergence rate for total variation (TV) minimization problems. We also prove a convergence result for PDHG applied to TV denoising with some restrictions on the PDHG step size parameters. We show how to interpret this special case as a projected averaged gradient method applied to the dual functional. We discuss the range of parameters for which these methods can be shown to converge. We also present some numerical comparisons of these algorithms applied to TV denoising, TV deblurring, and constrained l1 minimization problems.
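As a concrete illustration of the setting, the TV denoising problem min_u ½||u − f||² + λ||Du||₁ can be written as a saddle-point problem and attacked with a PDHG-type iteration. The sketch below (NumPy, 1D signals) uses the extrapolation step characteristic of the modified PDHG variants discussed in the abstract; the operator, step sizes, and iteration count are illustrative choices, not the paper's.

```python
import numpy as np

def tv_denoise_pdhg(f, lam=1.0, tau=0.25, sigma=0.25, iters=300):
    """Minimize 0.5*||u - f||^2 + lam*||D u||_1 via a primal-dual
    (PDHG-type) iteration with extrapolation. D is the 1D forward
    difference operator; the dual variable p lives in the inf-norm ball."""
    n = len(f)
    D = lambda u: np.diff(u)  # forward differences, length n-1
    Dt = lambda p: np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))  # adjoint of D
    u = f.copy()
    u_bar = u.copy()
    p = np.zeros(n - 1)
    for _ in range(iters):
        # dual ascent step, then projection onto {|p_i| <= 1}
        p = p + sigma * lam * D(u_bar)
        p = p / np.maximum(1.0, np.abs(p))
        # primal step: closed-form prox of the quadratic data term
        u_new = (u + tau * (f - lam * Dt(p))) / (1.0 + tau)
        u_bar = 2.0 * u_new - u  # extrapolation ("modified" PDHG)
        u = u_new
    return u
```

The step-size product here satisfies tau*sigma*||lam*D||² ≤ 1 for lam ≤ 1, which is the kind of restriction under which convergence results of this type are proved.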
A collaborative convex framework for factoring a data matrix X into a nonnegative product AS, with a sparse coefficient matrix S, is proposed. We restrict the columns of the dictionary matrix A to coincide with certain columns of the data matrix X, thereby guaranteeing a physically meaningful dictionary and dimensionality reduction. We use l(1,∞) regularization to select the dictionary from the data and show that this leads to an exact convex relaxation of l(0) in the case of distinct noise-free data. We also show how to relax the restriction-to-X constraint by initializing an alternating minimization approach with the solution of the convex model, obtaining a dictionary close to but not necessarily in X. We focus on applications of the proposed framework to hyperspectral endmember and abundance identification and also show an application to blind source separation of nuclear magnetic resonance data.
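For intuition, one common convention for the l(1,∞) norm sums, over the rows of S, each row's largest-magnitude entry; penalizing it drives whole rows of S to zero, which is what deselects data columns as dictionary elements. A minimal sketch under that convention (the exact convention and scaling used in the paper may differ):

```python
import numpy as np

def l1_inf(S):
    """l(1,inf) group norm: sum over rows of each row's max magnitude.
    Penalizing it zeroes out entire rows of S, so only a few columns
    of the data matrix X end up selected as dictionary elements."""
    return np.sum(np.max(np.abs(S), axis=1))
```

Note that a matrix whose mass is concentrated in few rows pays less under this norm than one spreading the same l1 mass across many rows, which is exactly the row-sparsity effect exploited here.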
Demixing problems in many areas such as hyperspectral imaging and differential optical absorption spectroscopy (DOAS) often require finding sparse nonnegative linear combinations of dictionary elements that match observed data. We show how aspects of these problems, such as misalignment of DOAS references and uncertainty in hyperspectral endmembers, can be modeled by expanding the dictionary with grouped elements and imposing a structured sparsity assumption that the combinations within each group should be sparse or even 1-sparse. If the dictionary is highly coherent, it is difficult to obtain good solutions using convex or greedy methods, such as nonnegative least squares (NNLS) or orthogonal matching pursuit. We use penalties related to the Hoyer measure, which is the ratio of the l1 and l2 norms, as sparsity penalties to be added to the objective in NNLS-type models. For solving the resulting nonconvex models, we propose a scaled gradient projection algorithm that requires solving a sequence of strongly convex quadratic programs. We discuss its close connections to convex splitting methods and difference of convex programming. We also present promising numerical results for example DOAS analysis and hyperspectral demixing problems.
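A minimal numerical sketch of this model class is given below: plain projected gradient descent on the NNLS objective plus the l1/l2 ratio penalty. This is not the paper's scaled gradient projection algorithm (which solves a sequence of strongly convex quadratic programs); the dictionary, step size, and starting point are all illustrative assumptions.

```python
import numpy as np

def nnls_l1l2_ratio(A, b, mu=0.05, iters=2000, eps=1e-12):
    """Projected-gradient sketch for
        min_{x >= 0} 0.5*||A x - b||^2 + mu * ||x||_1 / ||x||_2.
    Illustrative only; the paper's method uses scaled gradient
    projection with strongly convex quadratic subproblems."""
    m, n = A.shape
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + mu)
    x = 0.1 * np.ones(n)  # small positive start keeps ||x||_2 away from 0
    for _ in range(iters):
        r = A @ x - b
        nx2 = np.sqrt(x @ x) + eps
        # gradient of ||x||_1 / ||x||_2 on the nonnegative orthant
        g_ratio = 1.0 / nx2 - x.sum() * x / nx2 ** 3
        # gradient step on the full objective, then projection onto x >= 0
        x = np.maximum(x - step * (A.T @ r + mu * g_ratio), 0.0)
    return x
```

The ratio penalty is scale invariant, which is why it can prefer genuinely sparse supports where a plain l1 penalty on a coherent nonnegative dictionary may not.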
The ratio of the l1 and l2 norms has been used empirically to enforce sparsity of scale-invariant solutions in nonconvex blind source separation problems such as nonnegative matrix factorization and blind deblurring. In this paper, we study the mathematical theory of the sparsity-promoting properties of the ratio metric in the context of basis pursuit via overcomplete dictionaries. Due to the coherence of the dictionary elements, convex relaxations such as l1 minimization or nonnegative least squares may not find the sparsest solutions. We find sufficient conditions on the nonnegative solutions of the basis pursuit problem so that the sparsest solutions can be recovered exactly by minimizing the nonconvex ratio penalty. Similar results hold for the difference of the l1 and l2 norms. In the unconstrained form of the basis pursuit problem, these penalties are robust and help select sparse, if not the sparsest, solutions. We give analytical and numerical examples and introduce sequentially convex algorithms to illustrate how the ratio and difference penalties are computed to produce both stable and sparse solutions.
AMS 2000 subject classifications: 94A12, 94A15, 90C26, 90C25.
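The "sequentially convex" idea for the difference penalty can be sketched as a standard difference-of-convex (DC) iteration: linearize −μ||x||₂ at the current iterate and solve the resulting convex l1 subproblem, here with ISTA. All parameters and the inner solver are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def dca_l1_minus_l2(A, b, mu=0.1, outer=10, inner=200):
    """Difference-of-convex sketch for
        min_x 0.5*||A x - b||^2 + mu*(||x||_1 - ||x||_2).
    Each outer step linearizes -mu*||x||_2 at the current iterate and
    solves the resulting convex l1-regularized problem with ISTA."""
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    t = 1.0 / L
    shrink = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
    x = np.zeros(n)
    for _ in range(outer):
        nrm = np.linalg.norm(x)
        w = x / nrm if nrm > 0 else np.zeros(n)  # subgradient of ||x||_2
        for _ in range(inner):  # ISTA on the convex subproblem
            grad = A.T @ (A @ x - b) - mu * w
            x = shrink(x - t * grad, t * mu)
    return x
```

Since ||x||₂ ≥ ⟨w, x⟩ with equality at the linearization point, each convex subproblem majorizes the original objective, so the outer iteration is monotonically nonincreasing.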
We propose an extended full-waveform inversion formulation that includes general convex constraints on the model. Though the full problem is highly nonconvex, the overarching optimization scheme arrives at geologically plausible results by solving a sequence of relaxed and warm-started constrained convex subproblems. The combination of box, total-variation, and successively relaxed asymmetric total-variation constraints allows us to steer clear of parasitic local minima while keeping the estimated physical parameters laterally continuous and in a physically realistic range. For accurate starting models, numerical experiments carried out on the challenging 2004 BP velocity benchmark demonstrate that bound and total-variation constraints improve the inversion result significantly by removing inversion artifacts related to source encoding and by clearly improving the delineation of the top, bottom, and flanks of a high-velocity, high-contrast salt inclusion. The experiments also show that for poor starting models these two constraints by themselves are insufficient to detect the bottom of high-velocity inclusions such as salt. Inclusion of the one-sided asymmetric total-variation constraint overcomes this issue by discouraging velocity lows from building up during the early stages of the inversion. To the author's knowledge, the presented algorithm is the first to successfully remove the imprint of local minima caused by poor starting models and bandwidth-limited, finite-aperture data.
† John "Ernie" Esser passed away on March 8, 2015 while preparing this manuscript. The original is posted here: https://www.slim.eos.ubc.ca/content/total-variation-regularization-strategies-full-waveform-inversion-improving-robustness-noise. arXiv:1608.06159v1 [math.OC]
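The warm-started relax-and-solve strategy can be caricatured on a toy problem: solve a sequence of constrained descent problems whose feasible set is widened stage by stage, reusing each stage's solution as the next starting model. This sketch uses only box constraints with a simple relaxation schedule; the paper additionally uses total-variation and asymmetric total-variation constraints, whose projections are considerably more involved.

```python
import numpy as np

def constrained_descent(grad, m0, lo, hi, schedule, step=0.1, iters=100):
    """Warm-started sketch: a sequence of box-constrained descent
    problems whose bounds widen by the factors in `schedule`, each
    stage starting from the previous stage's solution.
    `grad` returns the gradient of the (possibly nonconvex) misfit."""
    m = np.clip(m0, lo, hi)
    mid = 0.5 * (lo + hi)
    for relax in schedule:              # e.g. [0.25, 0.5, 1.0]
        lo_r = mid - relax * (mid - lo)  # tightest bounds first,
        hi_r = mid + relax * (hi - mid)  # progressively relaxed
        for _ in range(iters):
            # projected gradient step onto the current box
            m = np.clip(m - step * grad(m), lo_r, hi_r)
    return m
```

Tight early constraints keep the iterates in a plausible region while the data misfit is still dominated by cycle-skipped energy; by the final stage the (relaxed) constraints are loose and the data drive the model.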
Given appropriate data acquisition, processing to remove nonprimary arrivals, and use of an accurate migration algorithm, it is the quality of the subsurface velocity model that typically controls the quality of imaging that can be obtained from salt-affected seismic data. Full-waveform inversion has the potential to improve the accuracy, resolution, repeatability, and speed with which such velocity models can be generated, but, in the absence of an accurate starting model, that potential is difficult to realize in practice. Presented are successful inversion results, obtained from synthetic subsalt models, using a robust full-waveform inversion code that includes constraints upon the set of allowable earth models. These constraints include limitations on the total variation of the velocity of the model and, most significantly, on the asymmetric variation of velocity with depth such that negative velocity excursions are limited. During the iteration, these constraints are relaxed progressively so that the final model is driven principally by the seismic data, but the constraints act to steer the inversion path away from local minima in its early stages. This methodology is applied to portions of the 2004 BP benchmark and Phase I SEAM salt models, recovering an accurate model of the salt body, including its base and flanks, and an accurate model of the subsalt velocity structure, starting from one-dimensional velocity models that are severely cycle skipped. This approach removes entirely the requirement to pick salt boundaries from migrated seismic data, and acts as a form of automatic salt and sediment flooding during full-waveform inversion.
Full-waveform inversion (FWI) can be formulated as a nonlinear least-squares optimization problem. This nonconvex problem can be computationally expensive because it requires repeated solutions of the wave equation. Randomized subsampling techniques allow us to work with small subsets of (monochromatic) source experiments, reducing the computational cost. However, this subsampling may weaken subsurface illumination or introduce subsampling-related incoherent artifacts. These subsampling-related artifacts — in conjunction with the desire to obtain high-fidelity inversion results — motivate us to develop a technique to regularize this inversion problem. Following earlier work, we have taken advantage of the fact that curvelets represent subsurface models and model perturbations parsimoniously. At first impulse, promoting sparsity on the model directly seemed the most natural way to proceed, but we have determined that in certain cases it can be advantageous to promote sparsity on the Gauss-Newton updates instead. Although constraining the l1 norm of the descent directions did not change the underlying FWI objective, the constrained model updates remained descent directions, removed subsampling-related artifacts, and improved the overall inversion result. We have empirically observed this phenomenon in situations where the different model updates occurred at roughly the same locations in the curvelet domain. We have further investigated and analyzed this behavior, in which nonlinear inversions benefit from sparsity-promoting constraints on the updates, by means of a set of carefully selected examples including the phase retrieval problem and time-harmonic FWI. In all cases, we have observed a faster decay of the residual and model error as a function of the number of iterations.
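Constraining the l1 norm of a model update amounts to a Euclidean projection of the candidate update onto an l1 ball (in the paper's setting, applied in the curvelet domain). A standard sort-based projection, shown here as an illustrative sketch with the ball radius as an assumed tuning parameter:

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto {x : ||x||_1 <= radius},
    using the standard sort-and-threshold construction."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()  # already feasible
    u = np.sort(np.abs(v))[::-1]           # magnitudes, descending
    cssv = np.cumsum(u)
    # largest k with u_k above the running soft-threshold level
    k = np.nonzero(u * np.arange(1, len(v) + 1) > (cssv - radius))[0][-1]
    theta = (cssv[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

Because the projection soft-thresholds in place, the projected update keeps the sign pattern of the dominant coefficients, which is consistent with the observation above that the constrained updates remain descent directions.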