2019
DOI: 10.1002/cpa.21850

Fast Binary Embeddings and Quantized Compressed Sensing with Structured Matrices

Abstract: This paper deals with two related problems, namely distance‐preserving binary embeddings and quantization for compressed sensing. First, we propose fast methods to replace points from a subset X ⊂ ℝ^n, associated with the Euclidean metric, with points in the cube {±1}^m, and we associate the cube with a pseudometric that approximates Euclidean distance among points in X. Our methods rely on quantizing fast Johnson‐Lindenstrauss embeddings based on bounded orthonormal systems and partial circulant ensembles, both…
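As a rough illustration of the embedding-plus-pseudometric pattern the abstract describes, the sketch below replaces the paper's fast structured matrices (bounded orthonormal systems, partial circulant ensembles) and noise-shaping quantizers with a plain Gaussian projection followed by one-bit sign quantization, and uses the normalized Hamming distance as the pseudometric. For unit-norm points this recovers angular rather than Euclidean distance, so it only mimics the overall pattern, not the paper's constructions or guarantees; all names are illustrative.

```python
import numpy as np

def binary_embed(X, m, seed=0):
    """Map rows of X (points in R^n) to sign vectors in {+1, -1}^m.

    Stand-in for the paper's fast embeddings: a dense Gaussian matrix is used
    here instead of bounded orthonormal systems or partial circulant ensembles,
    and plain sign quantization instead of the noise-shaping quantizers."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    return np.sign(X @ A.T)

def hamming_pseudometric(b1, b2):
    """Normalized Hamming distance between two sign vectors; for unit-norm
    inputs and a Gaussian matrix this concentrates around the angle between them."""
    return np.mean(b1 != b2)

# Tiny usage example: two nearby unit vectors get a small Hamming distance.
rng = np.random.default_rng(1)
x = rng.standard_normal(128); x /= np.linalg.norm(x)
y = x + 0.1 * rng.standard_normal(128); y /= np.linalg.norm(y)
B = binary_embed(np.vstack([x, y]), m=2048)
print(hamming_pseudometric(B[0], B[1]))  # small, reflecting the small angle between x and y
```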

Cited by 23 publications (37 citation statements); references 70 publications. Citing works span 2019–2023.

Selected citation statements:
“…This simple algorithm first finds a GMRA center that quantizes to a bit sequence close to the quantized measurements, where "closeness" is determined using a pseudometric that respects the quantization; it then optimizes over all points in the associated approximate tangent space to enforce, as much as possible, the consistency of the quantization. Using the results of [8] we prove that the quantization error associated with our proposed reconstruction algorithm decays polynomially or exponentially as a function of the number of measurements, depending on the quantization scheme. This greatly improves on the sub-linear error decay associated with scalar quantization in [9].…”
Section: I (mentioning)
confidence: 90%
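As a hedged sketch of the two-stage idea described in this excerpt (not the authors' exact algorithm), the snippet below assumes one-bit measurements q = sign(Ax), uses the normalized Hamming distance as the pseudometric for selecting a GMRA center, and then takes subgradient steps on a hinge-type consistency penalty restricted to the affine tangent space attached to that center. The centers, tangent bases, step size, and all function names are illustrative assumptions.

```python
import numpy as np

def recover_from_one_bit(q, A, centers, tangent_bases, step=0.01, iters=200):
    """Two-stage sketch: (1) pick the GMRA center whose quantized measurements
    are closest to q in normalized Hamming distance; (2) run subgradient steps
    on a hinge-type consistency loss over the affine tangent space x = c + U t
    attached to that center.  Illustrative only."""
    # Stage 1: coarse selection of a center via the quantization-aware pseudometric.
    dists = [np.mean(np.sign(A @ c) != q) for c in centers]
    j = int(np.argmin(dists))
    c, U = centers[j], tangent_bases[j]   # U: n x d orthonormal basis of the approximate tangent space

    # Stage 2: encourage sign(A (c + U t)) == q via a one-sided hinge penalty.
    t = np.zeros(U.shape[1])
    for _ in range(iters):
        r = q * (A @ (c + U @ t))                 # positive where the sign constraint holds
        grad = -(U.T @ (A.T @ (q * (r < 0))))     # subgradient of sum_i max(0, -q_i <a_i, x>)
        t -= step * grad
    return c + U @ t
```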
“…Thus, it is necessary to consider the effect of quantization in the design of the recovery algorithms. Indeed, sparse vector recovery and low-rank matrix recovery have been studied in the presence of various quantization schemes [7], [8], [11], [12], [13]. We look to extend these results to account for those structured signals that lie on a compact, low-dimensional submanifold of ℝ^N for which we have a Geometric Multi-Resolution Analysis (GMRA) [1].…”
Section: I (mentioning)
confidence: 99%
“…Iwen and Saab [23] used probabilistic arguments and the property of efficient storage to construct random quantization schemes with an exponential error decay rate with respect to the bit usage. In [21], similar ideas are applied to Σ∆ quantization; moreover, the connection between decimation and distributed noise shaping can also be seen there.…”
Section: 2 (mentioning)
confidence: 99%
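For reference, a minimal greedy first-order Σ∆ quantizer on a vector of measurements, illustrating the basic noise-shaping recursion these works build on; it is a textbook sketch, not the decimation or distributed-noise-shaping schemes of [21], [23], or [24].

```python
import numpy as np

def first_order_sigma_delta(y):
    """Greedy first-order Sigma-Delta quantization of y into {+1, -1}^m.

    State recursion: q_i = sign(u_{i-1} + y_i), u_i = u_{i-1} + y_i - q_i.
    If |y_i| <= 1 for all i, the state stays bounded (|u_i| <= 1), which is
    what lets later averaging (e.g. decimation) suppress the quantization error."""
    y = np.asarray(y, dtype=float)
    u = 0.0
    q = np.empty_like(y)
    for i, yi in enumerate(y):
        q[i] = 1.0 if u + yi >= 0 else -1.0
        u = u + yi - q[i]
    return q

# Usage: quantize measurements y = A @ x scaled so that |y_i| <= 1.
```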
“…Moreover, the connection between decimation and distributed noise shaping can also be seen there. Both [23] and [21] use probabilistic arguments, which only ensure success with some probability rather than a deterministic guarantee. For an explicit and deterministic adaptation to finite-dimensional signals, the author proved in [24] that there exists an operator, called the alternative decimation operator, that behaves similarly to decimation for bandlimited signals.…”
Section: 2 (mentioning)
confidence: 99%