2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2017.7952996

Matrix completion of noisy graph signals via proximal gradient minimization

Abstract: This paper takes on the problem of recovering the missing entries of an incomplete matrix, which is known as matrix completion, when the columns of the matrix are signals that lie on a graph and the available observations are noisy. We solve a version of the problem regularized with the Laplacian quadratic form by means of the proximal gradient method, and derive theoretical bounds on the recovery error. Moreover, in order to speed up the convergence of the proximal gradient, we propose an initializat…
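The abstract describes the method only at a high level. As a rough sketch of what one proximal gradient iteration for a Laplacian-regularized completion problem can look like, here is a minimal NumPy example. The objective assumed below (masked least squares + Laplacian quadratic form + a nuclear-norm term handled by singular-value thresholding) and the function name, parameters alpha, tau, and the step-size rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prox_grad_graph_mc(Y, mask, L, alpha=1.0, tau=1.0, n_iter=200, step=None):
    """Proximal-gradient sketch for Laplacian-regularized matrix completion.

    Assumed objective (illustrative, not necessarily the paper's exact one):
        0.5 * ||mask * (X - Y)||_F^2
      + 0.5 * alpha * trace(X^T L X)   # columns of X treated as graph signals on L
      + tau * ||X||_*                  # nuclear norm, handled by the prox step
    """
    X = np.zeros_like(Y)
    if step is None:
        # Lipschitz constant of the smooth part is at most 1 + alpha * lambda_max(L)
        lam_max = np.linalg.eigvalsh(L)[-1]
        step = 1.0 / (1.0 + alpha * lam_max)
    for _ in range(n_iter):
        grad = mask * (X - Y) + alpha * (L @ X)   # gradient of the smooth terms
        Z = X - step * grad                       # gradient step
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - step * tau, 0.0)       # singular-value soft-thresholding
        X = (U * s) @ Vt                          # prox of the nuclear norm
    return X
```

In a toy run, mask would be a 0/1 array marking the observed entries of the noisy matrix Y, and L the combinatorial Laplacian of the graph on which the columns live.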

Citations: cited by 5 publications (10 citation statements)
References: 12 publications
“…Plugging the latter into the objective of (2) and substituting W = K_w B and H = K_h C yields (15), which is the objective used by the inductive MC [16]; therefore, we have shown that inductive MC is a special case of KMC. This leads to the following result.…”
Section: Theorem 2 (If the KMC hypothesis class is F…)
Citation type: mentioning
confidence: 99%
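Equations (2) and (15) and reference [16] belong to the citing paper and are not reproduced in this excerpt. Purely as a sketch of the algebra behind the substitution, and under the assumption (not stated in the excerpt) that the reconstructed matrix is parameterized bilinearly as F = W H^T:

```latex
% Illustrative only: the bilinear form F = W H^T and the shapes are assumptions.
% K_w, K_h: row- and column-side kernel matrices;  B, C: coefficient factors.
W = K_w B, \qquad H = K_h C
\quad\Longrightarrow\quad
F = W H^{\top} = K_w \,(B C^{\top})\, K_h^{\top}.
```

Under this reading, the kernel matrices take the place of the side-information feature matrices of inductive MC and B C^T acts as the low-rank core, which is the correspondence the excerpt uses to relate the two objectives.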
“…, where b_{n,m} and c_{l,m} are the entries at (n, m) and (l, m) of the factor matrices B and C from (17), respectively. Therefore, (19) can be rewritten as…”
Section: Kronecker Kernel MC…
Citation type: mentioning
confidence: 99%
“…One difference between the two loss functions is that (29) does not explicitly limit the rank of the recovered matrix F = unvec(v_R), since it has NL degrees of freedom through γ, while in (17) the rank of F cannot exceed p, since B and C are of rank at most p. In fact, the low-rank property is indirectly promoted in (29) through the kernel matrices.…”
Section: Kronecker Kernel MC…
Citation type: mentioning
confidence: 99%
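The rank remark in this excerpt follows from a one-line bound; the shapes below (B and C having p columns) are an assumption consistent with "rank at most p".

```latex
% Illustrative: if B \in \mathbb{R}^{N \times p} and C \in \mathbb{R}^{L \times p}, then
\operatorname{rank}(F) = \operatorname{rank}(B C^{\top})
  \le \min\{\operatorname{rank}(B), \operatorname{rank}(C)\} \le p,
% whereas F = \operatorname{unvec}(v) with v \in \mathbb{R}^{NL} can represent any
% N x L matrix, so its rank is only controlled indirectly (here, via the kernel matrices).
```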