2010 6th International Symposium on Turbo Codes & Iterative Information Processing
DOI: 10.1109/istc.2010.5613803
IMP: A message-passing algorithm for matrix completion

Abstract: A new message-passing (MP) method is considered for the matrix completion problem associated with recommender systems. We attack the problem using a (generative) factor graph model that is related to a probabilistic low-rank matrix factorization. Based on the model, we propose a new algorithm, termed IMP, for the recovery of a data matrix from incomplete observations. The algorithm is based on a clustering followed by inference via MP (IMP). The algorithm is compared with a number of other matrix completion algorithms…
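The abstract describes a two-stage pipeline: cluster the users/items, then infer the missing entries. As a rough, hypothetical illustration of that cluster-then-infer structure (not the authors' IMP algorithm, which runs belief-propagation-style inference on a generative factor graph), the numpy sketch below clusters users on their observed ratings and fills missing entries with cluster-conditional item means. All names and the k-means-style clustering are illustrative assumptions.

```python
# Hypothetical sketch of a cluster-then-infer matrix completion step.
# NOT the IMP algorithm: message passing is replaced here by a simple
# cluster-conditional mean estimate, purely for illustration.
import numpy as np

def complete_by_clustering(R, mask, n_clusters=5, n_iters=20, seed=0):
    """R: (n_users, n_items) ratings; mask: True where an entry is observed."""
    rng = np.random.default_rng(seed)
    R = R.astype(float)
    n_users, n_items = R.shape
    labels = rng.integers(0, n_clusters, size=n_users)  # random initial clusters
    global_mean = R[mask].mean()
    centroids = np.full((n_clusters, n_items), global_mean)
    for _ in range(n_iters):
        # Update step: per-cluster item means over observed entries only.
        centroids[:] = global_mean
        for k in range(n_clusters):
            rows = labels == k
            counts = mask[rows].sum(axis=0)
            sums = np.where(mask[rows], R[rows], 0.0).sum(axis=0)
            seen = counts > 0
            centroids[k, seen] = sums[seen] / counts[seen]
        # Assignment step: move each user to the closest centroid,
        # measuring distance only on that user's observed items.
        for u in range(n_users):
            obs = mask[u]
            if obs.any():
                d = ((centroids[:, obs] - R[u, obs]) ** 2).mean(axis=1)
                labels[u] = int(np.argmin(d))
    # "Inference": fill each missing entry with its cluster's item mean.
    R_hat = R.copy()
    for u in range(n_users):
        R_hat[u, ~mask[u]] = centroids[labels[u], ~mask[u]]
    return R_hat
```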

Cited by 7 publications (9 citation statements); references 21 publications.
“…Applying these approximations to (26) and absorbing -invariant terms into the term, we obtain (27) where we used the relationship (28) and defined…”
Section: Approximated Factor-to-variable Messages
confidence: 99%
“…In our framework, $\{p_{x_{nl}}\}$ would be chosen to induce sparsity, $Z = AX$ would represent the noiseless observations, and $\{p_{y_{ml}|z_{ml}}\}$ would model the (possibly noisy) observation mechanism. While a plethora of approaches to these problems have been proposed based on optimization techniques (e.g., [5]-[15]), greedy methods (e.g., [16]-[20]), Bayesian sampling methods (e.g., [21], [22]), variational methods (e.g., [23]-[27]), and discrete message passing (e.g., [28]), ours is based on the Approximate Message Passing (AMP) framework, an instance of loopy belief propagation (LBP) [29] that was recently developed to tackle linear [30]-[32] and generalized linear [33] inference problems encountered in the context of compressive sensing (CS). In the generalized-linear CS problem, one estimates $x \in \mathbb{R}^N$ from observations $y \in \mathbb{R}^M$ that are statistically coupled to the transform outputs $z = Ax$ through a separable likelihood function $p_{y|z}(y|z)$, where in this case the transform $A$ is fixed and known.…”
confidence: 99%
“…For higher values of the rank r, rigidity of the graph is related to uniqueness of the solution of the reconstruction problem [SC09]. Finally, message passing algorithms for this problem were studied in [KYP10,KM10a].…”
Section: Matrix Completion
confidence: 99%
“…In our framework, $\{p_{x_{nl}}\}$ would be chosen to induce sparsity, $Z = AX$ would represent the noiseless observations, and $\{p_{y_{ml}|z_{ml}}\}$ would model the (possibly noisy) observation mechanism. While a plethora of approaches to these problems have been proposed based on optimization techniques (e.g., [4]-[14]), greedy methods (e.g., [15]-[19]), Bayesian sampling methods (e.g., [20], [21]), variational methods (e.g., [22]-[26]), and discrete message passing (e.g., [27]), ours is based on the Approximate Message Passing (AMP) framework, an instance of loopy belief propagation (LBP) [28] that was recently developed to tackle linear [29]-[31] and generalized linear [32] inference problems encountered in the context of compressive sensing (CS). In the generalized-linear CS problem, one estimates $x \in \mathbb{R}^N$ from observations $y \in \mathbb{R}^M$ that are statistically coupled to the transform outputs $z = Ax$ through a separable likelihood function $p_{y|z}(y|z)$, where in this case the transform $A$ is fixed and known.…”
Section: Introduction
confidence: 99%
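The passage above describes the generalized-linear AMP setting. For the special linear case $z = Ax$ with a sparsity-inducing prior, one common AMP instance pairs a soft-thresholding denoiser with an Onsager-corrected residual; the minimal sketch below follows that pattern under illustrative assumptions (the function names and the residual-based threshold rule are choices made here, not the cited papers' exact algorithm).

```python
# Minimal AMP sketch for the linear CS model y = A x + w with sparse x.
# Illustrative only: the threshold schedule is a common heuristic choice.
import numpy as np

def soft_threshold(v, t):
    """Soft-thresholding denoiser eta(v; t)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp_lasso(A, y, n_iters=30, alpha=1.5):
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iters):
        theta = alpha * np.linalg.norm(z) / np.sqrt(M)  # residual-based threshold
        x_new = soft_threshold(x + A.T @ z, theta)      # denoise the pseudo-data
        # Onsager correction: (N/M) * <eta'> * z, where <eta'> is the
        # fraction of coordinates surviving the threshold, times N/N.
        z = y - A @ x_new + (np.count_nonzero(x_new) / M) * z
        x = x_new
    return x

# Toy usage: recover a 25-sparse signal from 250 random projections.
rng = np.random.default_rng(0)
M, N, k = 250, 500, 25
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(M)
x_hat = amp_lasso(A, y)
```

The Onsager term is what distinguishes AMP from plain iterative soft thresholding: it decorrelates the effective noise across iterations, which is what makes the state-evolution analysis in the cited AMP literature go through.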