In compressed sensing, one takes n < N samples of an N-dimensional vector x₀ using an n × N matrix A, obtaining undersampled measurements y = Ax₀. For random matrices with independent standard Gaussian entries, it is known that, when x₀ is k-sparse, there is a precisely determined phase transition: for a certain region in the (k/n, n/N) phase diagram, convex optimization (min ‖x‖₁ subject to y = Ax, x ∈ X^N) typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property, with the same phase transition location, holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to X^N for four different sets X ∈ {[0,1], R+, R, C}, and the results establish our finding for each of the four associated phase transitions.

sparse recovery | universality in random matrix theory | equiangular tight frames | restricted isometry property | coherence

Compressed sensing aims to recover a sparse vector x₀ ∈ X^N from indirect measurements y = Ax₀ ∈ X^n with n < N; the system of equations y = Ax₀ is therefore underdetermined. Nevertheless, it has been shown that, under conditions on the sparsity of x₀, by using a random measurement matrix A with i.i.d. Gaussian entries and a nonlinear reconstruction technique based on convex optimization, one can, with high probability, exactly recover x₀ (1, 2). The cleanest expression of this phenomenon is visible in the asymptotic regime of large n and N. We suppose that the object x₀ is k-sparse (it has at most k nonzero entries) and consider the situation where k ∼ ρn and n ∼ δN. Fig. 1A depicts the phase diagram (ρ, δ) ∈ (0,1)² and a curve ρ*(δ) separating a success phase from a failure phase. Namely, if ρ < ρ*(δ), then with overwhelming probability for large N, convex optimization will recover x₀ exactly; however, if ρ > ρ*(δ), then with overwhelming probability for large N, convex optimization will fail. [Indeed, Fig. 1 depicts four curves ρ*(δ | X) of this kind for X ∈ {[0,1], R+, R, C}, one for each of the different types of assumptions that we can make about the entries of x₀ ∈ X^N (details below).]

How special are Gaussian matrices to the above results? It was shown, first empirically in ref. 3 and recently theoretically in ref. 4, that a wide range of random matrix ensembles exhibits precisely the same behavior, by which we mean the same separation into success and failure phases with the same phase boundary. Such universality, if exhib...
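The recovery step described above, min ‖x‖₁ subject to y = Ax, is straightforward to reproduce numerically. The sketch below is not the authors' experimental code: it assumes NumPy and SciPy are available, fixes illustrative sizes N = 200, n = 100, k = 20 (so δ = 0.5 and ρ = 0.2, inside the Gaussian success phase for X = R), and solves the ℓ1 problem through the standard linear-programming reformulation x = u − v with u, v ≥ 0.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative sizes (not from the paper): delta = n/N = 0.5, rho = k/n = 0.2.
N, n, k = 200, 100, 20
rng = np.random.default_rng(0)

# Gaussian measurement matrix A and a k-sparse object x0 with entries in X = R.
A = rng.standard_normal((n, N))
x0 = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x0[support] = rng.standard_normal(k)
y = A @ x0

# Solve min ||x||_1 subject to y = Ax via the LP reformulation:
# write x = u - v with u, v >= 0 and minimize sum(u) + sum(v).
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N), method="highs")
x_hat = res.x[:N] - res.x[N:]

print("max reconstruction error:", np.max(np.abs(x_hat - x0)))
print("exact recovery (up to solver tolerance):", np.allclose(x_hat, x0, atol=1e-4))
```

In an experiment of the kind reported here, only the construction of A would change: one would substitute a deterministic matrix (for instance, a Spikes and Sines dictionary) for the Gaussian draw, while the ℓ1 optimization step stays the same.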