2010
DOI: 10.1007/s10107-010-0422-2

Null space conditions and thresholds for rank minimization

Cited by 122 publications (113 citation statements) · References 29 publications

“…To answer the raised question probabilistically, represent the linear map $X \mapsto \mathcal{A}(X)$ in matrix form as $\mathcal{A}(X) = A\,\operatorname{vec}(X)$, where $A$ is a matrix and $\operatorname{vec}(X)$ is the vector obtained from $X$ by stacking up the columns of $X$. It is shown in [14] that, as the problem dimension goes to infinity, the probability that optimizations (34) and (35) have the same solution is equal to one if the entries of the matrix $A$ are independently sampled from a zero-mean, unit-variance Gaussian distribution. In other words, the aforementioned heuristic method almost always works correctly for a standard rank-minimization problem whose linear constraints are randomly generated from a Gaussian probability distribution.…”
Section: Heuristic Methods for Rank Minimization
confidence: 99%
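
As a concrete illustration of the Gaussian setting described in this citation statement, here is a minimal sketch, assuming illustrative sizes and a cvxpy-based solver (none of which come from the cited papers): draw a random low-rank matrix, observe it through a Gaussian measurement map, and recover it via the nuclear-norm heuristic.

```python
# Minimal sketch of the Gaussian rank-minimization experiment (illustrative
# sizes and solver choice; not code from the cited papers). We draw a random
# rank-r matrix X0, observe b = A vec(X0) for a Gaussian A, and recover X0 by
# minimizing the nuclear norm, the convex surrogate for rank.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, r, p = 10, 10, 2, 80                      # assumed demo dimensions

X0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # rank r
A = rng.standard_normal((p, m * n))             # i.i.d. N(0, 1) entries
b = A @ X0.flatten(order="F")                   # vec(): stack the columns

X = cp.Variable((m, n))
# Each measurement row, reshaped column-wise, gives <A_i, X> = b_i.
mats = [A[i].reshape((m, n), order="F") for i in range(p)]
constraints = [cp.sum(cp.multiply(Ai, X)) == bi for Ai, bi in zip(mats, b)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()

print("relative recovery error:",
      np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```

With enough Gaussian measurements relative to the rank, the reported error is near zero, matching the almost-sure exactness described above.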
“…Since the matrix is nonsingular, applying Sylvester's Law of Inertia to the relation yields (14). On the other hand, it can be concluded from the Hamiltonian structure of the matrix that (15). Furthermore, since every eigenvalue of the first matrix is an eigenvalue of the second, and since all eigenvalues of the latter are positive, the quantity in question is bounded from below. In light of the equalities (14) and (15), the relation is possible only under the stated condition. Thus, the matrix has negative eigenvalues.…”
Section: B. Passive Control Unit
confidence: 99%
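
As a quick numerical companion to the inertia argument above, the following sketch (using random stand-in matrices, not the cited paper's) verifies that a congruence transform preserves the signs, though not the values, of a symmetric matrix's eigenvalues.

```python
# Numeric check of Sylvester's Law of Inertia (illustrative matrices, not the
# ones from the cited paper): for nonsingular S, the congruence S.T @ M @ S
# preserves the counts of positive, negative, and zero eigenvalues of M.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
M = (M + M.T) / 2                     # symmetric test matrix
S = rng.standard_normal((5, 5))       # generically nonsingular

def inertia(sym, tol=1e-9):
    """Return (# positive, # negative, # zero) eigenvalues of a symmetric matrix."""
    w = np.linalg.eigvalsh(sym)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

print("inertia of M       :", inertia(M))
print("inertia of S.T M S :", inertia(S.T @ M @ S))   # identical counts
```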
“…We show that OptSpace is also capable of finding the missing nearby distances in our scenario and hence providing us with their corresponding ToFs. To the best of our knowledge, all of the above work, as well as the recent matrix completion algorithms [17], [18], deals only with random missing entries. However, in our case we have structured missing entries in addition to random ones (see Section II-B), an aspect that was absent from the previous work.…”
Section: B. Related Work
confidence: 99%
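
To make the random-versus-structured distinction concrete, here is a minimal sketch under stated assumptions: nuclear-norm completion stands in for OptSpace (which has no standard library implementation), and the sizes and masks are invented for the demo.

```python
# Contrast random missing entries with an added structured missing block in
# low-rank matrix completion. Nuclear-norm minimization stands in for
# OptSpace; all sizes and masks are assumptions for this demo.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n, r = 12, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r data

mask_random = rng.random((n, n)) > 0.3     # ~30% of entries missing at random
mask_struct = mask_random.copy()
mask_struct[:4, -4:] = False               # plus a 4x4 structured missing block

def complete(mask):
    """Recover M from its observed entries via nuclear-norm minimization."""
    W = mask.astype(float)                 # 1 = observed, 0 = missing
    X = cp.Variable((n, n))
    cp.Problem(cp.Minimize(cp.normNuc(X)),
               [cp.multiply(W, X) == W * M]).solve()
    return np.linalg.norm(X.value - M) / np.linalg.norm(M)

print("relative error, random mask    :", complete(mask_random))
print("relative error, structured mask:", complete(mask_struct))
```

A whole missing block leaves some entries far less constrained than uniformly random erasures do, which is why the structured case is the harder one flagged in the statement above.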
“…In this case, one can show that (17) holds. Case 2: here, the minimum area is achieved when the center of the circle is on the exterior boundary, as in Fig. 13(b). Thus, under an assumption that is reasonable according to the problem statement, we will have (18). Combining (17) and (18), we can find the lower bound as …”
Section: Upper Bound On
confidence: 99%
“…but also for the bound itself. The proof is based partly on the methodology of [25], where the minimum nuclear norm relaxation of the rank minimization problem is analyzed under linear constraints on the unknown matrix. In contrast, related probabilistic analysis in [11] and [5] is based on a generalization of the restricted isometry property of A that serves only as a sufficient condition for the exactness of the convex relaxation; see also [8].…”
Section: B. Probability Bound
confidence: 99%
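
For intuition about the restricted isometry property invoked above, a small Monte Carlo sketch (the sizes and the 1/sqrt(p) normalization are assumptions for the demo) checks how tightly a normalized Gaussian map preserves the Frobenius norm of random rank-r matrices.

```python
# Monte Carlo glimpse of a restricted-isometry-style bound for a Gaussian map:
# for random rank-r matrices X, ||A vec(X)|| should stay close to ||X||_F once
# A is scaled by 1/sqrt(p). Sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
m, n, r, p = 15, 15, 2, 150
A = rng.standard_normal((p, m * n)) / np.sqrt(p)   # normalized Gaussian map

ratios = []
for _ in range(200):
    X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    ratios.append(np.linalg.norm(A @ X.flatten(order="F")) / np.linalg.norm(X))

# Ratios concentrated near 1 correspond to a small restricted isometry constant
# on rank-r matrices; a wide spread would mean the sufficient condition fails.
print("min/max ratio over samples:",
      round(min(ratios), 3), round(max(ratios), 3))
```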