2014
DOI: 10.1109/tsp.2014.2357776

Bilinear Generalized Approximate Message Passing—Part I: Derivation

Abstract: In this paper, we extend the generalized approximate message passing (G-AMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. Here, in Part I of a two-part paper, we derive our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm […]
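As a hedged illustration of the generalized-bilinear setup the abstract refers to, the sketch below sets up Z = A·X with a separable, entrywise observation channel (here, additive white Gaussian noise on a random subset of entries, as in matrix completion). The dimensions, noise variance, and sampling rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch of the generalized-bilinear observation model that BiG-AMP targets:
# unknown factors A (M x N) and X (N x L) are to be recovered from measurements Y
# that depend on Z = A @ X only through a separable, entrywise likelihood p(y_ml | z_ml).
# All sizes and parameters below are illustrative choices, not taken from the paper.

rng = np.random.default_rng(0)
M, N, L = 50, 5, 60                      # low rank N relative to M and L
A = rng.standard_normal((M, N))
X = rng.standard_normal((N, L))
Z = A @ X                                # noiseless bilinear "output"

# Example separable channel: AWGN on a random subset of entries
# (the matrix-completion / PIAWGN setting discussed in the citing excerpts below).
noise_var = 0.01
mask = rng.random((M, L)) < 0.3          # observe roughly 30% of the entries
Y = np.where(mask, Z + np.sqrt(noise_var) * rng.standard_normal((M, L)), np.nan)

# BiG-AMP would now pass approximate messages between A, X, and Z to infer the
# factors from Y; that iteration is what Part I of the paper derives.
print("observed fraction:", mask.mean())
```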

Cited by 232 publications (276 citation statements) · References 47 publications

“…Finally, the JUICESD-RiGm algorithm is obtained by replacing (24) and (26); initialize (·)_{l,t} = 0; for q = 1, 2, ..., Q: calculate v_{z_{l,t}} and ẑ_{l,t}, ∀l, t, based on (13)–(16); calculate v_{y_{k,t}} and ŷ_{k,t}, ∀k, t, based on (17)–(22); end. \\ CCE module: calculate the message of g_k in each time slot according to (23); calculate the message of g_k by combining the messages over multiple time slots and the a-priori distribution of g_k according to (27) and (25)…”
Section: CCE Based On RiGm Approximation (mentioning)
confidence: 99%
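The excerpt above is an extraction of the citing paper's algorithm listing. As a rough, hedged sketch of its control flow only, the following uses hypothetical placeholder functions in place of the update equations (13)–(27), which are not available on this page.

```python
import numpy as np

# Structural sketch only: Q inner iterations updating (v_z, z_hat) and then
# (v_y, y_hat), followed by a CCE step that combines per-time-slot messages
# about g_k. The helpers are hypothetical placeholders, not the cited equations.

def update_z(v_y, y_hat):            # placeholder for eqs. (13)-(16)
    return 0.5 * v_y, 0.5 * y_hat

def update_y(v_z, z_hat):            # placeholder for eqs. (17)-(22)
    return 0.5 * v_z, 0.5 * z_hat

def combine_slot_messages(msgs):     # placeholder for eqs. (23), (25), (27)
    return np.mean(msgs, axis=0)

Q, T = 10, 4                          # illustrative iteration count and slot count
v_y, y_hat = np.ones(T), np.zeros(T)
for q in range(Q):
    v_z, z_hat = update_z(v_y, y_hat)
    v_y, y_hat = update_y(v_z, z_hat)

g_message = combine_slot_messages(np.stack([y_hat for _ in range(T)]))
```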
“…Well-known algorithms include the multiplicative update [6], alternating projected gradient methods [16], alternating nonnegative least squares (ANLS) with the active set method [17] and a few recent methods such as the bilinear generalized approximate message passing [18], [19], as well as methods based on the block coordinate descent [20]. These methods often possess strong convergence guarantees (to Karush-Kuhn-Tucker (KKT) points of the NMF problem) and most of them lead to satisfactory performance in practice; see [8] and the references therein for detailed comparison and comments for different algorithms.…”
Section: A. Related Work (mentioning)
confidence: 99%
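For concreteness, here is a hedged sketch of the multiplicative-update rule [6] mentioned in the excerpt, applied to the Frobenius-norm NMF objective ||V − WH||_F² with nonnegative V, W, H. The data sizes, rank, iteration count, and epsilon guard are illustrative assumptions, not taken from the excerpt.

```python
import numpy as np

# Multiplicative updates for NMF (Lee-Seung style) on a random nonnegative matrix.
rng = np.random.default_rng(0)
M, N, r = 40, 30, 5
V = rng.random((M, N))                 # nonnegative data matrix
W = rng.random((M, r)) + 1e-3          # nonnegative factor initializations
H = rng.random((r, N)) + 1e-3
eps = 1e-12                            # guards against division by zero

for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print("relative fit error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```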
“…To initialize for EM-BiG-AMP, we adapt the procedure outlined in [15] to our matrix-completion problem, giving the EM initializations and…”
Section: EM-BiG-AMP (mentioning)
confidence: 99%
“…(1), (2), and where the likelihood function is known and separable, i.e., (3). In Part I of the work [1], we proposed and derived the BiG-AMP algorithm, whose general form is summarized in [1, Table III]. We also uncovered special cases under which the general approach can be simplified, such as the scalar-variance BiG-AMP under possibly incomplete additive white Gaussian noise (PIAWGN) observations, as summarized in [1, Table IV], and its specialization to Gaussian priors, as summarized by the BiG-AMP-Lite algorithm in [1, Table V].…”
mentioning
confidence: 99%
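A hedged rendering of the separable likelihood and the PIAWGN special case described in the excerpt; the observation set Ω and noise variance ν^w are notational assumptions used here only for illustration.

```latex
% Separable (entrywise) likelihood, with the PIAWGN channel on observed entries;
% \Omega and \nu^w are illustrative notation, not reproduced from this page.
\[
  p_{\mathsf{Y}|\mathsf{Z}}(\mathbf{Y}\mid\mathbf{Z})
    = \prod_{m,l} p_{y_{ml}|z_{ml}}\!\left(y_{ml}\mid z_{ml}\right),
  \qquad
  p_{y_{ml}|z_{ml}}\!\left(y_{ml}\mid z_{ml}\right) =
    \begin{cases}
      \mathcal{N}\!\left(y_{ml};\, z_{ml},\, \nu^{w}\right), & (m,l)\in\Omega \ \text{(observed)},\\[2pt]
      \text{constant (uninformative)}, & (m,l)\notin\Omega \ \text{(missing)}.
    \end{cases}
\]
```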