2012 **Abstract:** An algorithm is described for the nonnegative rank factorization (NRF) of some completely positive (CP) matrices whose rank is equal to their CP-rank. The algorithm can compute the symmetric NRF of any nonnegative symmetric rank-r matrix that contains a diagonal principal submatrix of that rank and size, with a leading cost of O(rm²) operations in the dense case. The algorithm is based on geometric considerations and is easy to implement. The implications for matrix graphs are also discussed.
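The key structural fact behind such a factorization can be illustrated with a short sketch. If a symmetric nonnegative matrix A of rank r contains an r×r diagonal principal submatrix D (with positive diagonal), then writing A in block form with D on top gives A = [[D, B], [Bᵀ, C]]; since rank(A) = rank(D) = r, the Schur complement vanishes (C = Bᵀ D⁻¹ B), and W = [D^{1/2}; Bᵀ D^{-1/2}] is a nonnegative factor with A = W Wᵀ. The following is a minimal, hypothetical sketch of this construction (the function name `symmetric_nrf` and its interface are illustrative; this is not the paper's code, which also handles locating the submatrix):

```python
import numpy as np

def symmetric_nrf(A, idx):
    """Symmetric NRF A = W @ W.T for a nonnegative symmetric A whose
    principal submatrix A[idx, idx] is diagonal with positive entries
    and rank(A) == len(idx).  Illustrative sketch, not the paper's code."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    r = len(idx)
    d = np.diag(A[np.ix_(idx, idx)])            # positive diagonal of the block D
    rest = [i for i in range(n) if i not in idx]
    B = A[np.ix_(idx, rest)]                    # nonnegative off-diagonal block
    W = np.zeros((n, r))
    W[idx, :] = np.diag(np.sqrt(d))             # rows of D^{1/2}
    W[rest, :] = B.T / np.sqrt(d)               # B^T D^{-1/2}, entrywise nonnegative
    return W
```

Because D^{1/2} and D^{-1/2} are nonnegative diagonal matrices and B ≥ 0, the factor W is nonnegative by construction, so this yields an exact symmetric NRF whenever the stated rank condition holds.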

(16 citation statements)
“…Kalofolias and Gallopoulos [17] extend this result and construct a factorization of completely positive rank-two matrices.…”

mentioning

confidence: 74%


“…The case of rank(A) = 2 has been studied in [4] and [1], where an algorithmic process for an exact NRF of A is proposed, but our way is very simple and a part of our general method of matrix factorization. In [5], an exact, symmetric nonnegative rank factorization of A, i.e. A = W Wᵀ, is determined in the case where A is a symmetric n × n nonnegative real matrix which contains a diagonal principal submatrix of the same rank as A.…”

confidence: 99%

“…Algorithm 4 can be easily adapted to handle (19), by replacing the b_ij's with b_ij + Λ_j. In fact, the derivative of the penalty term only influences the constant part in the gradient; see (12).…”

confidence: 99%

“…In fact, the derivative of the penalty term only influences the constant part in the gradient; see (12). However, it seems the solutions of (19) are very sensitive to the parameter Λ and hence are difficult to tune. Note that another way to identify sparser factors is simply to increase the factorization rank r, or to sparsify the input matrix A (only keeping the important edges in the graph induced by A; see [1] and the references therein) - in fact, a sparser matrix A induces sparser factors since…”

confidence: 99%