Motivated by p-norm optimization problems arising in sparse optimization, high-dimensional data analytics, and statistics, this paper studies sparsity properties of a wide range of p-norm based optimization problems with p > 1, including generalized basis pursuit, basis pursuit denoising, ridge regression, and elastic net. It is well known that when p > 1, these optimization problems lead to less sparse solutions. However, a quantitative characterization of these adverse sparsity properties has not been available. This paper shows how to exploit optimization and matrix analysis techniques to develop a systematic treatment of a broad class of p-norm based optimization problems for general p > 1, and shows that their optimal solutions attain full support, and thus have the least sparsity, for almost all measurement matrices and measurement vectors. Comparisons to p-norm optimization with 0 < p ≤ 1, implications for robustness, and extensions to the complex setting are also given. These results shed light on the analysis and computation of general p-norm based optimization problems in various applications.
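The full-support claim is easy to observe numerically. The sketch below (not from the paper; matrix sizes and the regularization weight are arbitrary choices for illustration) solves a ridge regression problem, a p = 2 instance of the class above, and counts the nonzero entries of the solution:

```python
# Illustrative sketch: ridge regression (an l2-regularized problem, p = 2)
# typically returns a fully dense solution even when the ground truth is sparse.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))      # underdetermined measurement matrix
x_true = np.zeros(50)
x_true[:3] = [1.0, -2.0, 0.5]          # sparse ground truth
b = A @ x_true

lam = 0.1
# Closed-form ridge solution: x = (A^T A + lam * I)^{-1} A^T b
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(50), A.T @ b)

# Generically, every entry is numerically nonzero: full support, least sparsity.
print(np.count_nonzero(np.abs(x_ridge) > 1e-12))
```

For generic (e.g., Gaussian) data, all 50 entries come out nonzero, in line with the "almost all measurement matrices and vectors" statement above.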
In this paper, we study the solution uniqueness of an individual feasible vector of a class of convex optimization problems involving convex piecewise affine functions and subject to general polyhedral constraints. This class of problems incorporates many important polyhedral constrained ℓ1 recovery problems arising from sparse optimization, such as basis pursuit, LASSO, and basis pursuit denoising, as well as polyhedral gauge recovery. By leveraging the max-formulation of convex piecewise affine functions and convex analysis tools, we develop dual-variable based necessary and sufficient uniqueness conditions via simple yet unifying approaches; these conditions are applied to a wide range of ℓ1 minimization problems under possible polyhedral constraints. An effective linear program based scheme is proposed to verify solution uniqueness conditions. The results obtained in this paper not only recover the known solution uniqueness conditions in the literature by removing restrictive assumptions but also yield new uniqueness conditions for much broader constrained ℓ1-minimization problems.

Existing works develop solution uniqueness conditions for several important ℓ1 minimization problems and their variations, e.g., basis pursuit (BP), the least absolute shrinkage and selection operator (LASSO), and basis pursuit denoising (BPDN). The recent paper [9] gives another proof of the uniqueness conditions of basis pursuit established in [25] and clarifies the geometric meaning of these conditions, with extensions to polyhedral gauge recovery. Motivated by constrained sparse signal recovery [6,10,24], solution uniqueness of basis pursuit under the nonnegativity constraint is studied in [27].
However, solution uniqueness of ℓ1 minimization under general polyhedral constraints has not been fully addressed, despite the variety of polyhedral constraints in applications, e.g., the monotone cone constraint in order statistics, and the polyhedral constraint in the Dantzig selector [4] (cf. Section 3.5). Inspired by the lack of solution uniqueness conditions under general polyhedral constraints and the fact that the ℓ1-norm is a special convex piecewise affine (PA) function, we study a broad class of convex optimization problems involving convex PA functions and subject to general linear inequality constraints, and we develop necessary and sufficient solution uniqueness conditions for an individual feasible vector. This general framework incorporates many important ℓ1 minimization problems under possible inequality constraints, such as BP, LASSO, BPDN, and polyhedral gauge recovery. Different from the techniques developed in a similar framework in [9], we exploit the max-formulation of a convex PA function (cf. Section 2). The max-formulation leads to much simpler, yet unifying and systematic, approaches to establish solution uniqueness conditions for a wide range of problems; see Remark 3.1 for comparison. These approaches not only recover all the known solution uniqueness conditions in the literature by removing restrictive assumptions but also yield new uniqueness conditions for broader constrained ℓ1-minimization problems.
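A prototypical member of this problem class is basis pursuit, min ‖x‖₁ subject to Ax = b, which can be recast as a linear program via the standard split x = u − v with u, v ≥ 0. The sketch below (illustrative only; this is not the paper's LP-based uniqueness-verification scheme, and the problem sizes are arbitrary) solves it with SciPy:

```python
# Illustrative sketch: basis pursuit  min ||x||_1  s.t.  Ax = b  as an LP,
# using the standard variable split x = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 10, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 7]] = [1.5, -2.0]          # 2-sparse ground truth
b = A @ x_true

c = np.ones(2 * n)                    # minimize 1^T u + 1^T v = ||x||_1
A_eq = np.hstack([A, -A])             # encodes A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x = res.x[:n] - res.x[n:]             # recover x from the split variables
print(res.status, np.linalg.norm(A @ x - b))
```

The LP returns a feasible point whose ℓ1-norm does not exceed that of the sparse ground truth; whether that minimizer is the *unique* solution is exactly the question the conditions above address.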
The l1 regularized least square problem, or the lasso, is a non-smooth convex minimization problem which is widely used in diverse fields. However, solving such a minimization is not straightforward since the objective is not differentiable. In this paper, an equivalent smooth minimization with box constraints is obtained and proved to be equivalent to the lasso problem. Accordingly, an efficient recurrent neural network is developed which is guaranteed to converge globally to the solution of the lasso. Further, it is shown that the property "the dual of the dual is the primal" holds for the l1 regularized least square problem. Experiments on image and signal recovery illustrate the reasonable performance of the proposed neural network. Index Terms: sparse, l1 regularization, smooth, neural network.
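The idea of trading the non-smooth l1 term for smooth constrained structure can be sketched with the standard variable split x = u − v, u, v ≥ 0, which makes the objective differentiable on the nonnegative orthant. Below is a minimal projected-gradient solver for this reformulation (a generic technique for illustration; the paper's recurrent-network solver and its box-constrained reformulation are different in form):

```python
# Illustrative sketch: smooth reformulation of the lasso
#   min 0.5 * ||A(u - v) - b||^2 + lam * 1^T (u + v),  u, v >= 0,
# solved by projected gradient descent with step 1/L.
import numpy as np

def lasso_split(A, b, lam, iters=2000):
    m, n = A.shape
    u = np.zeros(n)
    v = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    t = 1.0 / L
    for _ in range(iters):
        g = A.T @ (A @ (u - v) - b)        # gradient of the smooth data term
        u = np.maximum(u - t * (g + lam), 0.0)   # project onto u >= 0
        v = np.maximum(v - t * (-g + lam), 0.0)  # project onto v >= 0
    return u - v

# Sanity check: for A = I the lasso solution is soft-thresholding of b.
x = lasso_split(np.eye(5), np.array([3.0, -0.5, 2.0, 0.0, -4.0]), lam=1.0)
print(x)  # close to [2, 0, 1, 0, -3]
```

The split doubles the number of variables but yields a smooth objective over a simple convex set, which is the kind of structure that constrained-optimization dynamics such as recurrent networks can exploit.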