2015
DOI: 10.1080/01621459.2014.983233

Bayesian Factorizations of Big Sparse Tensors

Abstract: It has become routine to collect data that are structured as multiway arrays (tensors). There is an enormous literature on low rank and sparse matrix factorizations, but limited consideration of extensions to the tensor case in statistics. The most common low rank tensor factorization relies on parallel factor analysis (PARAFAC), which expresses a rank k tensor as a sum of rank one tensors. When observations are only available for a tiny subset of the cells of a big tensor, the low rank assumption is not sufficient…
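As a quick illustration of the rank-one-sum structure the abstract describes, here is a minimal NumPy sketch. All dimensions, the rank k, and the factor names A, B, C are hypothetical and not taken from the paper; this shows only the generic CP/PARAFAC reconstruction, not the Bayesian model.

```python
import numpy as np

# Minimal sketch (assumed notation): a rank-k PARAFAC/CP factorization of a
# three-way tensor X expresses it as a sum of k rank-one tensors, i.e.
#   X[i, j, l] ~= sum_h A[i, h] * B[j, h] * C[l, h].

rng = np.random.default_rng(0)
I, J, L, k = 4, 5, 6, 3            # mode sizes and CP rank (hypothetical)

A = rng.standard_normal((I, k))    # mode-1 factor matrix
B = rng.standard_normal((J, k))    # mode-2 factor matrix
C = rng.standard_normal((L, k))    # mode-3 factor matrix

# Reconstruct the tensor as a sum of k outer products (rank-one tensors);
# einsum sums over the shared component index h.
X = np.einsum("ih,jh,lh->ijl", A, B, C)
print(X.shape)                     # (4, 5, 6)
```

Each column triple (A[:, h], B[:, h], C[:, h]) contributes one rank-one tensor; the paper's Bayesian treatment places priors over such factors, which this sketch does not attempt.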

Cited by 50 publications (47 citation statements)
References 45 publications
“…Matrix and array observations are becoming increasingly available in the big data era, thanks to rapid advances in information technology and the need to store data in structured forms; see, for example, Li, Kim and Altman [14], Hoff [11], Leng and Tang [13], Zhou, Li and Zhu [24], and Zhou et al. [25]. Consider independent and identically distributed matrix variates X_1, …”
Section: Introduction (mentioning, confidence: 99%)
“…This is a generalization of the sparse PARAFAC (sp-PARAFAC) model of [44] to the case of more than two groups, and gives much stronger control over parameter growth than PARAFAC when the truth consists of marginally independent groups of variables. This is shown formally for the special case of graphical models with empty separators in Theorem 4.2.…”
Section: Collapsed Tucker Decompositions (mentioning, confidence: 99%)
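For context on the two decompositions contrasted in the excerpt above, the standard three-way forms can be written as follows (generic notation, not the authors'; a Tucker core that is superdiagonal recovers PARAFAC):

```latex
\[
  \text{PARAFAC:}\quad
  \mathcal{X} \approx \sum_{h=1}^{k} \lambda_h\, a_h \circ b_h \circ c_h ,
  \qquad
  \text{Tucker:}\quad
  \mathcal{X} \approx \sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{r=1}^{R}
      g_{pqr}\, a_p \circ b_q \circ c_r .
\]
```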
“…Dunson and Xing [15] showed that a single latent class model is equivalent to a reduced-rank nonnegative PARAFAC decomposition of the joint probability tensor π , while the multiple latent class model in [3] implied a Tucker decomposition. See also [44] and [30] for extensions of these models to more complex settings.…”
Section: Introduction (mentioning, confidence: 99%)
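The equivalence cited from Dunson and Xing [15] can be sketched in standard latent class notation (the symbols here are generic, not the paper's): if a single latent class index h = 1, …, k has weights ν_h and each categorical variable y_j is conditionally independent given h with Pr(y_j = c_j | h) = λ^{(j)}_{h c_j}, then marginalizing over h yields a nonnegative PARAFAC of the joint probability tensor:

```latex
\[
  \pi_{c_1 \cdots c_p}
  = \Pr(y_1 = c_1, \ldots, y_p = c_p)
  = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j} .
\]
```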
“…More specifically, the S = 2 solution (total number of components = 6) should not be preferred to the T3 (2, 2, 1) one (approximately the same fit, one component fewer). Similar reasoning allowed us to discard the CP solutions with S = 3 and S = 4 components, taking into account the T3 ones for the (3, 3, 1) and (4, 4, 2) cases. In the latter case, the more parsimonious T3 model (10 components) had a higher fit than the CP one with S = 4 (12 components).…”
Section: Three-way Analysis (mentioning, confidence: 99%)
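The component counts in the excerpt appear to follow simple per-mode arithmetic (an assumption about how the authors count): a three-way CP solution with S components uses S components in each mode, 3S in total, while a Tucker3 (P, Q, R) solution uses P + Q + R. This reproduces every count quoted above:

```latex
\[
  \mathrm{CP}(S): 3S
  \;\Rightarrow\; S = 2 \mapsto 6,\ S = 4 \mapsto 12;
  \qquad
  \mathrm{T3}(P,Q,R): P + Q + R
  \;\Rightarrow\; (2,2,1) \mapsto 5,\ (4,4,2) \mapsto 10 .
\]
```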
“…[1][2][3][4] However, research on tensor-based methods has a long history. In fact, the first works on tensor decompositions were due to Hitchcock [5,6] and Cattell.…”
(mentioning, confidence: 99%)