Notations and outline. We describe the notation that we will use throughout the paper. We use regular lowercase letters for scalars, bold lowercase letters for vectors, bold capital letters for matrices, and regular capital letters for sets. The subscript i denotes the i-th element; superscripts denote iteration steps (k for the outer iteration and j for the inner loop). For a vector v, ∥v∥_0, ∥v∥_2, and ∥v∥_∞ denote the ℓ0-, ℓ2-, and ℓ∞-norms, respectively, and supp(v) denotes the support of v, i.e., the set of indices of its nonzero elements. For a vector v ∈ R^n, u = max{v, 0} denotes the vector whose i-th entry is max{v_i, 0}. For a matrix Φ ∈ R^{m×n} and an index set A ⊆ {1, 2, …, n}, Φ_A stands for the submatrix of Φ obtained by keeping only the columns indexed by A; likewise, v_A denotes the vector obtained by keeping only the components of a vector v ∈ R^n indexed by A.
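In code, the restriction notation above amounts to plain index operations. The following NumPy snippet (variable names are ours, purely for illustration) demonstrates supp(v), u = max{v, 0}, Φ_A, and v_A, using 0-based indices:

```python
import numpy as np

v = np.array([0.0, -1.5, 2.0, 0.0, 3.0])
supp_v = np.flatnonzero(v)           # supp(v): indices of the nonzero entries
u = np.maximum(v, 0.0)               # u = max{v, 0}, taken entrywise

Phi = np.arange(15.0).reshape(3, 5)  # Phi in R^{3x5}
A = [1, 4]                           # an index set A (0-based here)
Phi_A = Phi[:, A]                    # submatrix: columns of Phi indexed by A
v_A = v[A]                           # subvector: components of v indexed by A

print(supp_v)  # [1 2 4]
print(u)       # [0. 0. 2. 0. 3.]
```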
We study the sparse phase retrieval problem: recovering an $s$-sparse length-$n$ signal from $m$ magnitude-only measurements. Two-stage non-convex approaches for this problem have drawn much attention in recent studies. Despite non-convexity, many two-stage algorithms provably converge linearly to the underlying solution when appropriately initialized. In terms of sample complexity, however, the bottleneck of those algorithms with Gaussian random measurements often lies in the initialization stage. Although the refinement stage usually needs only $m=\Omega(s\log n)$ measurements, the widely used spectral initialization requires $m=\Omega(s^2\log n)$ measurements to produce a desired initial guess, which makes the total sample complexity order-wise larger than necessary. To reduce the number of measurements, we propose a truncated power method to replace the spectral initialization in non-convex sparse phase retrieval algorithms. We prove that $m=\Omega(\bar{s} s\log n)$ measurements, where $\bar{s}$ is the stable sparsity of the underlying signal, suffice to produce a desired initial guess. When the underlying signal contains only a few significant components, the sample complexity of the proposed algorithm is $m=\Omega(s\log n)$, which is optimal. Numerical experiments illustrate that the proposed method is more sample-efficient than state-of-the-art algorithms.
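A minimal sketch of the kind of truncated power iteration described above: multiply by the matrix, hard-threshold the iterate to its $s$ largest-magnitude entries, and renormalize. The function name, the synthetic test matrix, and the iteration count below are our assumptions for illustration; the paper's actual initialization (including how its data matrix is built from the magnitude measurements) may differ.

```python
import numpy as np

def truncated_power_method(Y, s, n_iter=50, seed=0):
    # Power iteration with hard thresholding: multiply by Y, keep only
    # the s largest-magnitude entries of the iterate, then renormalize.
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        z = Y @ x
        keep = np.argsort(np.abs(z))[-s:]   # indices of the s largest entries
        x = np.zeros(n)
        x[keep] = z[keep]                   # hard-threshold to sparsity s
        x /= np.linalg.norm(x)
    return x

# Toy check: recover a planted s-sparse leading eigenvector.
n, s = 20, 3
v = np.zeros(n)
v[:s] = 1.0 / np.sqrt(s)                    # planted sparse direction
Y = 5.0 * np.outer(v, v) + 0.1 * np.eye(n)  # eigengap: 5.1 vs 0.1
x_hat = truncated_power_method(Y, s)
print(abs(v @ x_hat) > 0.99)                # True: aligned up to sign
```

On this toy problem the iterate is exactly $s$-sparse by construction and aligns with the planted direction up to a global sign.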
In this paper, we study a concatenated coding scheme based on a sparse regression code (SPARC) and a tree code for unsourced random access in massive multiple-input multiple-output (MIMO) systems. We focus on efficient decoding of the inner SPARC under practical constraints. A two-stage method is proposed to achieve near-optimal performance while maintaining low computational complexity. Specifically, a one-step thresholding-based algorithm is first used to reduce the large dimension of the SPARC decoding problem, after which a relaxed maximum-likelihood estimator is employed for refinement. Extensive simulation results validate the near-optimal performance and the low computational complexity. In addition, for the covariance-based sparse recovery method, theoretical analyses characterize an upper bound on the number of active users supported when convex relaxation is considered, as well as the probability of successful dimension reduction by the one-step thresholding-based algorithm.
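To illustrate the role of the one-step thresholding stage, here is a hedged NumPy sketch: each codebook column is scored by its correlation energy with the received multi-antenna signal, and a fixed number of top-scoring columns survive the dimension reduction. The scoring rule, the `keep` parameter, and the toy setup are our assumptions, not the paper's exact covariance-based decoder.

```python
import numpy as np

def one_step_threshold(A, Y, keep):
    # Score each codebook column by its correlation energy with the
    # received signal Y, and keep the `keep` highest-scoring columns.
    scores = np.linalg.norm(A.T @ Y, axis=1)    # per-column energy
    return np.sort(np.argsort(scores)[-keep:])  # surviving column indices

# Toy setup (illustrative sizes): m observations, n codewords,
# L antennas, k active users sending unit-magnitude symbols.
rng = np.random.default_rng(1)
m, n, L, k = 128, 256, 8, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)    # codebook
active = rng.choice(n, size=k, replace=False)   # active codewords
X = np.zeros((n, L))
X[active] = rng.choice([-1.0, 1.0], size=(k, L))
Y = A @ X + 0.01 * rng.standard_normal((m, L))  # received signal
survivors = one_step_threshold(A, Y, keep=16)
print(set(active) <= set(survivors))            # True: actives survive
```

After this reduction, a more expensive refinement (here, the relaxed maximum-likelihood step) only needs to run on the 16 surviving columns instead of all 256.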
In this work we propose a non-convex two-stage stochastic alternating minimization (SAM) method for sparse phase retrieval. The proposed algorithm is guaranteed to recover the signal exactly from O(s log n) samples, provided the initial guess lies in a local neighborhood of the ground truth. The algorithm is thus two-stage: first we estimate a desired initial guess (e.g., via a spectral method), and then we apply a randomized alternating minimization strategy for local refinement. The hard-thresholding pursuit algorithm is employed to solve the sparsity-constrained least-squares subproblems. We give theoretical justification that SAM finds the underlying signal exactly in a finite number of iterations (no more than O(log m) steps) with high probability. Furthermore, numerical experiments illustrate that SAM requires fewer measurements than state-of-the-art algorithms for the sparse phase retrieval problem.
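As a sketch of the hard-thresholding pursuit (HTP) subroutine used for the sparsity-constrained least-squares subproblems, the following NumPy code alternates a gradient step, hard thresholding to the s largest entries, and a least-squares fit on the selected support. The unit step size, fixed iteration count, and toy test setup are simplified assumptions rather than the paper's exact implementation.

```python
import numpy as np

def htp(A, b, s, n_iter=20):
    # Hard Thresholding Pursuit for min ||Ax - b||_2 s.t. ||x||_0 <= s:
    # gradient step, keep the s largest entries, least squares on support.
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        u = x + A.T @ (b - A @ x)       # gradient step (unit step size)
        S = np.argsort(np.abs(u))[-s:]  # candidate support: s largest entries
        x = np.zeros(n)
        x[S] = np.linalg.lstsq(A[:, S], b, rcond=None)[0]  # LS fit on support
    return x

# Toy check: exact recovery of a planted s-sparse signal from noiseless data.
rng = np.random.default_rng(0)
m, n, s = 80, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, size=s, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], size=s) * rng.uniform(1.0, 2.0, size=s)
b = A @ x_true
x_hat = htp(A, b, s)
print(np.allclose(x_hat, x_true, atol=1e-8))  # True: exact recovery
```

Note this sketch solves a linear sparse least-squares subproblem only; in the full SAM method such subproblems arise inside the alternating minimization loop once the measurement signs/phases are fixed.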