2017
DOI: 10.1109/tsp.2017.2695450
Adaptive Low Rank Matrix Completion

Cited by 21 publications (10 citation statements)
References 48 publications
“…where $A(k) \in \mathbb{R}^{(2m-2)\times(2m+2)}$ and $\mathbf{0}$ is the all-zero matrix of appropriate size. At time $k$, the robots estimate the position of the leader, $p_1(k+1)$, and follow it while maintaining the shape described by (33). Further details about the constraint in (33) can be found in [28] and are discussed in Appendix D of the supplementary material.…”
Section: Problem Formulation
Mentioning confidence: 99%
“…At time $k$, the robots estimate the position of the leader, $p_1(k+1)$, and follow it while maintaining the shape described by (33). Further details about the constraint in (33) can be found in [28] and are discussed in Appendix D of the supplementary material. To this end, the leader broadcasts a signal that is subsequently used by the robots to estimate $p_1(k+1)$.…”
Section: Problem Formulation
Mentioning confidence: 99%
“…Compared with other approximate matrix methods such as the Incomplete Cholesky Decomposition (ICD), the Nyström method is faster. Its main idea is to obtain an approximation $\tilde{K}$ of the original matrix $K$ by reducing the rank, which is equivalent to randomly selecting $m$ rows and $m$ columns of $K$. This approximation can be written as $\tilde{K} = K_{n,m} K_{m,m}^{-1} K_{m,n}$, where $K_{n,m}$ is the $n \times m$ block of the original matrix $K$ and $m \ll n$. Low-rank approximation of this kind is commonly used to approximate the kernel matrix [20] and has been combined with the kernel SVM algorithm. However, the optimization algorithms for solving LR usually require iteration, so the kernel trick used in this algorithm is very inefficient.…”
Section: Introduction
Mentioning confidence: 99%
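The formula $\tilde{K} = K_{n,m} K_{m,m}^{-1} K_{m,n}$ in the excerpt is fully specified, so a short sketch can make it concrete. This is a generic NumPy illustration with uniform landmark sampling; a pseudo-inverse stands in for $K_{m,m}^{-1}$ to guard against a singular landmark block.

```python
import numpy as np

def nystrom_approximation(K, m, seed=None):
    """Nyström low-rank approximation of a symmetric kernel matrix K:
    K_tilde = K_{n,m} K_{m,m}^{-1} K_{m,n}, with m << n landmark columns
    chosen uniformly at random."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)  # m landmark indices
    C = K[:, idx]                               # K_{n,m}: n x m block
    W = K[np.ix_(idx, idx)]                     # K_{m,m}: m x m block
    return C @ np.linalg.pinv(W) @ C.T          # C.T == K_{m,n} for symmetric K

# Usage: approximate an RBF kernel matrix with 50 landmarks.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)
K_tilde = nystrom_approximation(K, m=50, seed=0)
print(np.linalg.norm(K - K_tilde) / np.linalg.norm(K))  # relative error
```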
“…In this paper, a method based on matrix completion and compressed sensing [17,18] is presented, referred to as sparse low-rank matrix completion (SLR-MC). Unlike the method proposed in [16], the low-rank part and the sparse part of the corrupted matrix are recovered by matrix completion and compressed sensing, respectively.…”
Section: Introduction
Mentioning confidence: 99%
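The excerpt does not spell out the SLR-MC algorithm itself, only that a corrupted matrix is split into a low-rank part and a sparse part recovered separately. Below is a minimal sketch of that general idea using a standard robust-PCA-style alternation (singular value thresholding for the low-rank part, entrywise soft thresholding for the sparse part, restricted to observed entries); the function names and the parameters tau and lam are illustrative assumptions, not the cited method.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: shrink all singular values by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def split_low_rank_sparse(M, mask, tau=1.0, lam=0.1, iters=100):
    """Decompose observed entries of M (where mask is True) into a
    low-rank component L and a sparse component S by alternating:
    L <- SVT(M - S) on observed entries, S <- soft-threshold(M - L).
    ASSUMPTION: generic sketch, not the SLR-MC algorithm of the paper."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(np.where(mask, M - S, L), tau)             # low-rank update
        R = np.where(mask, M - L, 0.0)
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse update
    return L, S
```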