Quantized compressive sensing (QCS) deals with the problem of coding compressive measurements of low-complexity signals with quantized, finite-precision representations, i.e., a mandatory process in any practical sensing model. While the resolution of this quantization clearly impacts the quality of signal reconstruction, there exist incompatible combinations of quantization functions and sensing matrices that prevent arbitrarily low reconstruction error as the number of measurements increases. This work shows that a large class of random matrix constructions known to respect the restricted isometry property (RIP) is "compatible" with simple scalar, uniform quantization provided a uniform random vector, or random dither, is added to the compressive signal measurements before quantization. In the context of estimating low-complexity signals (e.g., sparse or compressible signals, low-rank matrices) from their quantized observations, this compatibility is demonstrated by the existence of (at least) one signal reconstruction method, projected back projection (PBP), whose reconstruction error decays as the number of measurements increases. Interestingly, given one RIP matrix and a single realization of the dither, a small reconstruction error can be proved to hold uniformly over all signals in the considered low-complexity set. We confirm these observations numerically in several scenarios involving sparse signals, low-rank matrices, and compressible signals, with various RIP matrix constructions such as sub-Gaussian random matrices and random partial discrete cosine transform (DCT) matrices.

In this context, we prove the above-mentioned compatibility between the QCS model (4) and the class of RIP matrices as follows. Defining $\mathcal P_{\mathcal K}(\boldsymbol z)$ as the closest point to $\boldsymbol z$ in $\mathcal K$ (see Sec.
3.3), we demonstrate that the simple projected back projection (PBP) of the quantized measurements $\boldsymbol y$ onto the set $\mathcal K$ achieves a reconstruction error $\|\boldsymbol x - \hat{\boldsymbol x}\|$ that decays like $O(m^{-1/p})$ when $m$ increases, for some $p > 1$ depending only on $\mathcal K$.

In this respect, the main results of this paper can be summarized as follows (see Sec. 7 for their precise statements). We show first that if $\mathcal K$ is the set of sparse vectors, the set of low-rank matrices$^3$, or any finite union of low-dimensional subspaces (e.g., model-based sparsity [26] or group-sparse models [27]), and in the case where $\boldsymbol\xi \sim \mathcal U^m([0, \delta])$ and $\frac{1}{\sqrt m}\boldsymbol\Phi$ is generated from a random matrix distribution (see Def. 3.2) known to generate w.h.p. RIP matrices over $\mathcal K$ (and its multiples, see Sec. 3.1), then, w.h.p. over both $\boldsymbol\Phi$ and $\boldsymbol\xi$, and uniformly over all $\boldsymbol x \in \mathcal K \cap \mathbb B^n$ sensed from (4), PBP achieves the reconstruction error
$$\|\boldsymbol x - \hat{\boldsymbol x}\| \leq C_{\mathcal K}\, m^{-1/p},$$
with $C_{\mathcal K}$ depending on $\mathcal K$ and up to omitted log factors in the involved dimensions.

Second, if $\mathcal K$ is a bounded, convex and symmetric set of $\mathbb R^n$, e.g., the set of co...

$^2$ In this work, the term "resolution" does not refer to the number of bits used to encode the quantized values [25].
$^3$ Up to the identification of these matrices with their vector representation (see Sec. 4.2).
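To make the sensing model concrete, the following sketch simulates the dithered QCS acquisition $\boldsymbol y = \mathcal Q(\boldsymbol\Phi \boldsymbol x + \boldsymbol\xi)$ with a scalar uniform quantizer of resolution $\delta$ and a dither $\boldsymbol\xi \sim \mathcal U^m([0, \delta])$. The dimensions, sparsity level, and the midrise-style quantizer grid are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_quantize(z, delta):
    """Scalar uniform quantizer of resolution delta (midrise-style grid)."""
    return delta * (np.floor(z / delta) + 0.5)

# Hypothetical toy dimensions: n-dim signal, m measurements.
n, m, delta = 128, 512, 0.5

x = np.zeros(n)
x[:4] = rng.standard_normal(4)        # a 4-sparse signal (support chosen for illustration)

Phi = rng.standard_normal((m, n))     # sub-Gaussian (here Gaussian) sensing matrix
xi = rng.uniform(0.0, delta, size=m)  # random dither xi ~ U^m([0, delta])

y = uniform_quantize(Phi @ x + xi, delta)  # dithered quantized measurements
```

Applied entrywise, this quantizer never moves a value by more than $\delta/2$, and the uniform dither is what randomizes the otherwise deterministic quantization error.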
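The PBP estimator itself is a one-step procedure: back-project the quantized measurements as $\frac{1}{m}\boldsymbol\Phi^\top \boldsymbol y$, then project onto $\mathcal K$. For $\mathcal K$ the set of $s$-sparse vectors, $\mathcal P_{\mathcal K}$ is hard thresholding. The sketch below, with hypothetical dimensions and a Gaussian $\boldsymbol\Phi$, illustrates the expected qualitative behavior: the error tends to shrink as $m$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_quantize(z, delta):
    return delta * (np.floor(z / delta) + 0.5)

def hard_threshold(z, s):
    """Projection P_K onto s-sparse vectors: keep the s largest-magnitude entries."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    out[idx] = z[idx]
    return out

def pbp(y, Phi, s):
    """Projected back projection: project (1/m) Phi^T y onto the sparse set."""
    m = Phi.shape[0]
    return hard_threshold(Phi.T @ y / m, s)

n, s, delta = 256, 5, 0.5
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x /= np.linalg.norm(x)  # normalize so that x lies in K ∩ B^n

errors = []
for m in (200, 800, 3200):
    Phi = rng.standard_normal((m, n))
    xi = rng.uniform(0.0, delta, size=m)
    y = uniform_quantize(Phi @ x + xi, delta)
    errors.append(np.linalg.norm(x - pbp(y, Phi, s)))

print(errors)  # reconstruction error for increasing m
```

Note that PBP requires no iterative solver and no knowledge of the dither realization at reconstruction; only the projection $\mathcal P_{\mathcal K}$ (here, hard thresholding) encodes the low-complexity prior.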