Abstract: This paper investigates the theoretical guarantees of ℓ1-analysis regularization when solving linear inverse problems. Most previous works in the literature have focused on the sparse synthesis prior, where sparsity is measured as the ℓ1 norm of the coefficients that synthesize the signal from a given dictionary. In contrast, the more general analysis regularization minimizes the ℓ1 norm of the correlations between the signal and the atoms in the dictionary; the support of these correlations defines the analysis support. The corresponding variational problem encompasses several well-known regularizations such as the discrete total variation and the Fused Lasso. Our main contributions consist in deriving sufficient conditions that guarantee exact or partial analysis support recovery of the true signal in the presence of noise. More precisely, we give a sufficient condition to ensure that a signal is the unique solution of the ℓ1-analysis regularization in the noiseless case. The same condition also guarantees exact analysis support recovery and ℓ2-robustness of the ℓ1-analysis minimizer with respect to a small enough noise in the measurements. This condition turns out to be sharp for the robustness of the analysis support. To show partial support recovery and ℓ2-robustness to an arbitrary bounded noise, we introduce a stronger sufficient condition. When specialized to the ℓ1-synthesis regularization, our results recover the corresponding recovery and robustness guarantees previously known in the literature. From this perspective, our work is a generalization of these results. We finally illustrate these theoretical findings on several examples to study the robustness of the 1-D total variation and Fused Lasso regularizations.
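For concreteness, the two formulations contrasted above can be written as follows; this is a sketch with assumed notation (y the measurements, Φ the measurement operator, D the dictionary whose columns are the atoms, λ > 0 the regularization parameter), not text taken from the paper.

```latex
% l1-synthesis: sparsity of the coefficients \alpha that synthesize x = D\alpha
\min_{\alpha} \; \tfrac{1}{2}\|y - \Phi D \alpha\|_2^2 + \lambda \|\alpha\|_1
% l1-analysis: sparsity of the correlations D^* x between the signal and the atoms;
% the support of D^* x is the analysis support
\min_{x} \; \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda \|D^{*} x\|_1
```

Choosing D^* as a finite-difference operator recovers the discrete total variation, while stacking finite differences with the identity yields the Fused Lasso penalty.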
This paper is concerned with the convergence of over-relaxed variants of the Forward-Backward (FB) algorithm (in particular FISTA) in the case where the proximal maps and/or gradients are computed with possible errors. We show that, provided these errors are small enough, the algorithm still converges to a minimizer of the functional, with a speed of convergence (in terms of the values of the functional) that remains the same as in the noise-free case. We also show that larger errors can be allowed by using a lower over-relaxation than FISTA. This still leads to the convergence of the iterates, with an ergodic convergence speed faster than those of the classical FB algorithm and FISTA.
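As a rough illustration of the scheme under discussion, here is a minimal sketch of an inexact, over-relaxed forward-backward iteration. The names grad_f and prox_g, and the fact that they may be computed only approximately, are assumptions made for this example rather than the paper's notation; the FISTA-type update of t is shown as one possible relaxation rule.

```python
import numpy as np

def inexact_overrelaxed_fb(x0, grad_f, prox_g, L, n_iter=500):
    """Sketch of an over-relaxed forward-backward (FISTA-like) iteration.

    Hypothetical arguments: grad_f(x) returns the (possibly inexact) gradient
    of the smooth term, prox_g(z, gamma) returns a (possibly inexact) proximal
    point of the nonsmooth term with step gamma, L is a Lipschitz constant of
    grad_f.
    """
    gamma = 1.0 / L                       # step size
    x_prev = np.asarray(x0, dtype=float).copy()
    y = x_prev.copy()
    t = 1.0
    for _ in range(n_iter):
        # forward step on the smooth part, backward (prox) step on the
        # nonsmooth part; both may carry a small computational error
        x = prox_g(y - gamma * grad_f(y), gamma)
        # FISTA-type over-relaxation; a lower over-relaxation (smaller
        # momentum) tolerates larger errors, as discussed above
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev
```

For instance, with a least-squares data term f(x) = (1/2)||Ax - b||^2 one would pass grad_f = lambda x: A.T @ (A @ x - b) and, for an ℓ1 penalty, prox_g as soft-thresholding.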
In this paper we study the convergence of an inertial Forward-Backward algorithm with a particular choice of over-relaxation term. In particular, we show that for a sequence of over-relaxation parameters that does not satisfy Nesterov's rule, one can still expect relatively fast convergence properties for the objective function. We complement this work by studying the convergence of the algorithm in the case where the proximal operator is computed inexactly, and we give sufficient conditions on these errors to obtain convergence properties for the objective function.
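For orientation, one well-studied over-relaxation rule that departs from Nesterov's update is recalled below; this is an illustrative choice from the inertial forward-backward literature and not necessarily the exact sequence analyzed in the paper.

```latex
% Inertial forward-backward iteration, step size \gamma \le 1/L:
x_k = \operatorname{prox}_{\gamma g}\!\big(y_{k-1} - \gamma \nabla f(y_{k-1})\big),
\qquad y_k = x_k + \alpha_k \, (x_k - x_{k-1}),
% with, for example, the over-relaxation (inertial) sequence
\alpha_k = \frac{k-1}{k+a-1}, \qquad a > 3,
% which does not follow Nesterov's rule yet still yields an O(1/k^2)
% decay of the objective values and convergence of the iterates.
```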
In this paper, we are concerned with regularized regression problems where the prior regularizer is a proper, lower semicontinuous and convex function which is also partly smooth relative to a Riemannian submanifold. This encompasses as special cases several known penalties such as the Lasso (ℓ1-norm), the group Lasso (ℓ1−ℓ2 norm), the ℓ∞-norm, and the nuclear norm. This also includes so-called analysis-type priors, i.e. compositions of the previously mentioned penalties with linear operators, typical examples being the total variation or fused Lasso penalties. We study the sensitivity of any regularized minimizer to perturbations of the observations and provide its precise local parameterization. Our main sensitivity analysis result shows that the predictor moves locally stably along the same active submanifold as the observations undergo small perturbations. This local stability is a consequence of the smoothness of the regularizer when restricted to the active submanifold, which in turn plays a pivotal role in obtaining a closed-form expression for the variations of the predictor with respect to the observations. We also show that, for a variety of regularizers, including polyhedral ones or the group Lasso and its analysis counterpart, the resulting divergence formula holds Lebesgue almost everywhere. When the perturbation is random (with an appropriate continuous distribution), this allows us to derive an unbiased estimator of the degrees of freedom and of the prediction risk of the estimator. Our results hold without requiring the design matrix to be full column rank. They generalize those already known in the literature, such as for the Lasso problem, the general Lasso problem (analysis ℓ1-penalty), or the group Lasso, where existing results for the latter assume that the design is full column rank.
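To make the degrees-of-freedom statement concrete, the setting can be sketched as follows; the notation (Φ the design matrix, J the partly smooth regularizer, λ > 0, Gaussian noise) is assumed for the example, as Stein's lemma requires a Gaussian model.

```latex
% Regularized regression and its prediction map:
\hat{x}(y) \in \arg\min_{x} \; \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda J(x),
\qquad \hat{\mu}(y) = \Phi \, \hat{x}(y).
% If y \sim \mathcal{N}(\Phi x_0, \sigma^2 \mathrm{Id}) and y \mapsto \hat{\mu}(y)
% is weakly differentiable, Stein's lemma gives the unbiased
% degrees-of-freedom estimator
\widehat{\mathrm{df}}(y) \;=\; \operatorname{div} \hat{\mu}(y)
  \;=\; \operatorname{tr}\!\Big( \tfrac{\partial \hat{\mu}}{\partial y}(y) \Big),
% which in turn yields a SURE-type unbiased estimate of the prediction risk.
```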