In this paper, we investigate conditions for the unique recoverability of sparse integer-valued signals from few linear measurements. Both the objective of minimizing the number of nonzero components, the so-called ℓ0-norm, and its popular substitute, the ℓ1-norm, are covered. Furthermore, integer constraints and possible bounds on the variables are investigated. Our results show that the additional prior knowledge of signal integrality allows for recovering more signals than can be guaranteed by the established recovery conditions from (continuous) compressed sensing. Moreover, even though the considered problems are NP-hard in general (even with an ℓ1-objective), we investigate testing the ℓ0-recovery conditions via numerical experiments; it turns out that the corresponding problems are quite hard to solve in practice. However, medium-sized instances of ℓ0- and ℓ1-minimization with binary variables can be solved exactly within reasonable time.
Index Terms: Sparse recovery, compressed sensing, integrality constraints, nullspace conditions
I. INTRODUCTION

The recovery of sparse signals has received tremendous interest in recent years. The basic setting without noise is as follows: under the prior knowledge that a measurement vector b ∈ R^m \ {0} is generated by a sparse signal x ∈ R^n via Ax = b, where A ∈ R^{m×n} with rank(A) = m < n is the sensing matrix, the question is whether x can be uniquely recovered, given A and b. Thus, one approach is to find the sparsest x that explains the measurements, i.e., one minimizes ‖x‖_0 := |{i ∈ {1, ..., n} : x_i ≠ 0}| subject to the constraint Ax = b. However, this problem is NP-hard, see Garey and Johnson [1]. The crucial idea in this context (see, e.g., Chen et al. [2]) is to replace ‖x‖_0 by the ℓ1-norm ‖x‖_1 := |x_1| + · · · + |x_n|, which results in a convex problem that can even be cast as a linear program (LP) and is therefore tractable. The literature contains an abundance of conditions under which minimizers of ‖x‖_1 subject to Ax = b are unique and equal to the sparsest solution; we refer to the book by Foucart and Rauhut [3] for more information and an overview of selected specialized algorithms for the ℓ1-minimization problem.

The key point behind this series of striking results is the prior knowledge that b can be sparsely represented or approximated. A natural question is whether further knowledge about the structure of the representations x can lead to stronger recoverability results. In general terms, the two problems from above can be written as

  min { ‖x‖_0 : Ax = b, x ∈ X }   and   min { ‖x‖_1 : Ax = b, x ∈ X },

where X ⊆ R^n is a constraint set representing further restrictions on the representations. The "classical" results in the literature refer to the case X = R^n. One main example with X ≠ R^n is the case in which x has to be nonnegative, i.e., X = R^n_+; see, for instance, Donoho and Tanner [4], Bruckstein et al. [5], and Khajehnejad et al. [6].

In this paper, we investigate the case in which x is required to be integer, i.e., X ⊆ Z^n. This is motivated by ...
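As a concrete illustration of the LP reformulation mentioned above, the ℓ1-minimization problem min{‖x‖_1 : Ax = b} can be cast as a linear program by introducing auxiliary variables u ≥ |x| and minimizing their sum. The following sketch is not part of the paper; the function name and the choice of SciPy's `linprog` solver are our own assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b):
    """Solve min ||x||_1 s.t. Ax = b (basis pursuit) as an LP.

    Variables are z = [x; u] with u >= |x| componentwise, so that
    minimizing sum(u) minimizes the l1-norm of x.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of u
    # Inequalities encoding |x_i| <= u_i:  x - u <= 0  and  -x - u <= 0.
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    # Equality constraints Ax = b (u does not appear).
    A_eq = np.hstack([A, np.zeros((m, n))])
    bounds = [(None, None)] * n + [(0, None)] * n   # x free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b, bounds=bounds)
    return res.x[:n]

# Small example: b is generated by the 1-sparse signal x = (0, 0, 1).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
x = l1_min(A, b)   # recovers the sparsest solution here
```

In this toy instance every solution of Ax = b has the form (t, t, 1 - t), so the ℓ1-minimizer (0, 0, 1) coincides with the sparsest solution; in general this coincidence is exactly what the recovery conditions discussed in the paper characterize.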