2016
DOI: 10.1186/s13634-016-0369-4

One-bit compressive sampling via ℓ0 minimization

Abstract: The problem of 1-bit compressive sampling is addressed in this paper. We introduce an optimization model for reconstruction of sparse signals from 1-bit measurements. The model targets a solution that has the least ℓ0-norm among all signals satisfying consistency constraints stemming from the 1-bit measurements. An algorithm for solving the model is developed. Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0-n…
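
The model described in the abstract can be pictured with a short sketch: the 1-bit measurements retain only the sign of random projections, the consistency constraint requires a candidate signal to reproduce those signs, and the objective counts nonzero entries (the ℓ0 "norm"). The sketch below is a minimal illustration under assumed choices (Gaussian sensing matrix, arbitrary dimensions and sparsity); it is not the reconstruction algorithm developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 128, 8                      # signal length, number of 1-bit measurements, sparsity (illustrative)

Phi = rng.standard_normal((m, n))          # random sensing matrix (assumed Gaussian here)
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

y = np.sign(Phi @ x_true)                  # 1-bit measurements: only the signs are kept

def l0_norm(x, tol=1e-8):
    """Objective of the model: number of nonzero entries of x."""
    return int(np.count_nonzero(np.abs(x) > tol))

def is_consistent(x, Phi, y):
    """Consistency constraint from the 1-bit measurements: sign(Phi @ x) must equal y."""
    return bool(np.all(np.sign(Phi @ x) == y))

print(l0_norm(x_true), is_consistent(x_true, Phi, y))   # 8, True for the ground truth
```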

Cited by 9 publications (5 citation statements)
References 31 publications

“…To the best of our knowledge, combining CS with SWL techniques has not been attempted in the literature. There are, however, several works on 1-bit CS [8][9][10] that are quite different from the present approach. In 1-bit CS, the limiting case of 1-bit measurements is considered by preserving just the sign information of random samples or measurements and treating the measurements as sign constraints to be enforced in reconstruction.…”
Section: Introduction (mentioning)
confidence: 74%
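
To make the quoted description concrete, here is a minimal sketch of how sign measurements can be enforced as constraints during reconstruction: a gradient step on the one-sided squared penalty 0.5·Σ_i min(y_i⟨φ_i, x⟩, 0)² followed by hard thresholding, in the spirit of binary iterative hard thresholding. This is a generic 1-bit CS illustration, not the ℓ0 model of the paper; the initialization, step size, and iteration count are assumptions.

```python
import numpy as np

def reconstruct_from_signs(Phi, y, k, iters=200, step=1.0):
    """k-sparse estimate whose signs try to match the 1-bit data y = sign(Phi @ x)."""
    m, n = Phi.shape
    x = Phi.T @ y / m                           # back-projection initialization (common heuristic)
    x[np.argsort(np.abs(x))[:-k]] = 0.0
    for _ in range(iters):
        r = np.minimum(y * (Phi @ x), 0.0)      # negative exactly where a sign constraint is violated
        x = x - (step / m) * (Phi.T @ (y * r))  # gradient step on the penalty 0.5 * sum(r**2)
        x[np.argsort(np.abs(x))[:-k]] = 0.0     # hard thresholding: keep the k largest magnitudes
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x          # 1-bit data determines x only up to scale
```

A call such as x_hat = reconstruct_from_signs(Phi, y, k) returns a unit-norm k-sparse estimate, since sign-only measurements cannot recover amplitude.
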
“…Five anchors are selected such that they all are in the radio range of the target as it moves along a helical path according to Equation (12). The x-, y-, and z-dimensions of the anchors are taken as [10,100,10,100,90], [100,90,70,80,90] and [10,10,100,100,150] meters, respectively. The helix constants are r = 40, K = 20 and θ varies from zero to 2π.…”
Section: Simulation Results (mentioning)
confidence: 99%
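
Since Equation (12) is not reproduced in the snippet, the sketch below assumes the standard helix parameterization (r·cos θ, r·sin θ, K·θ) purely for illustration and plugs in the anchor coordinates and constants quoted above, e.g. to check how far each anchor gets from the target's path.

```python
import numpy as np

# Anchor coordinates in meters, taken from the quoted passage
# (x, y, z for each of the five anchors).
anchors = np.array([
    [ 10, 100,  10],
    [100,  90,  10],
    [ 10,  70, 100],
    [100,  80, 100],
    [ 90,  90, 150],
], dtype=float)

r, K = 40.0, 20.0                           # helix constants quoted above
theta = np.linspace(0.0, 2.0 * np.pi, 200)  # theta sweeps from 0 to 2*pi

# Assumed helix parameterization (Equation (12) itself is not shown in the snippet).
target = np.column_stack((r * np.cos(theta), r * np.sin(theta), K * theta))

# Distance from every path point to every anchor, e.g. to check radio range.
dists = np.linalg.norm(target[:, None, :] - anchors[None, :, :], axis=2)
print(dists.max(axis=0))                    # per-anchor maximum distance along the path
```
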
“…G_{σ,γ,ρ}(x) + η, we have x ∈ S, which along with supp(x) ⊇ J implies that supp(x) = J (if not, we will have Ξ_{σ,γ}(x) + λρ ∑_{i∈J} |x_i| + λρ ∑_{i∈supp(x)\J} |x_i| < G_{σ,γ,ρ}(x) < Ξ_{σ,γ}(x) + λρ ∑_{i∈J} |x_i| + η, which along with (40) and…”
Section: End (While) (mentioning)
confidence: 98%
“…However, as in conventional CS, the ℓ1-norm convex relaxation not only has a weak sparsity-promoting ability but also leads to a biased solution; see the discussion in [14]. Motivated by this, many researchers resort to nonconvex surrogate functions of the zero-norm, such as the minimax concave penalty (MCP) [53,20], the sorted ℓ1 penalty [20], logarithmic smoothing functions [40], the ℓq (0 < q < 1)-norm [15], and Schur-concave functions [32], and then develop algorithms for solving the associated nonconvex surrogate problems to achieve a better sparse solution. To the best of our knowledge, most of these algorithms lack a convergence certificate.…”
Section: Review On the Related Work (mentioning)
confidence: 99%
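
As a concrete illustration of the bias point raised in this quote, the sketch below compares the ℓ1 penalty with one of the listed nonconvex surrogates, the minimax concave penalty (MCP): unlike ℓ1, the MCP value stops growing once |t| exceeds γλ, so large coefficients are not shrunk further. The parameter values are illustrative and not taken from the cited works.

```python
import numpy as np

def l1_penalty(t, lam=1.0):
    """Convex l1 penalty: grows linearly without bound, which biases large coefficients."""
    return lam * np.abs(t)

def mcp_penalty(t, lam=1.0, gamma=3.0):
    """Minimax concave penalty: lam*|t| - t^2/(2*gamma) for |t| <= gamma*lam,
    constant at gamma*lam^2/2 beyond that, so large coefficients are not penalized further."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a ** 2 / (2.0 * gamma),
                    0.5 * gamma * lam ** 2)

t = np.linspace(-5.0, 5.0, 11)
print(np.round(l1_penalty(t), 3))
print(np.round(mcp_penalty(t), 3))
```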