2013
DOI: 10.21236/ada604842

When are Overcomplete Representations Identifiable? Uniqueness of Tensor Decompositions Under Expansion Constraints

Abstract: Overcomplete latent representations have been very popular for unsupervised feature learning in recent years. In this paper, we specify which overcomplete models can be identified given observable moments of a certain order. We consider probabilistic admixture or topic models in the overcomplete regime, where the number of latent topics can greatly exceed the size of the observed word vocabulary. While general overcomplete topic models are not identifiable, we establish generic identifiability under a constrai…
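As a minimal sketch (not from the paper), the object at the heart of the identifiability question can be illustrated for a single-topic admixture model: the third-order observable moment is a symmetric tensor whose CP decomposition encodes the topic-word matrix, and identifiability asks when that decomposition is unique. All dimensions and variable names below are illustrative assumptions.

```python
import numpy as np

# Hypothetical small instance: d vocabulary words, k latent topics.
# Overcomplete regime means k > d, as in the paper's setting.
rng = np.random.default_rng(0)
d, k = 4, 6

# Topic-word matrix A: each column a_i is a topic's distribution over words.
A = rng.random((d, k))
A /= A.sum(axis=0)

# Topic proportions w (a probability vector over topics).
w = rng.random(k)
w /= w.sum()

# Third-order moment of a single-topic admixture model:
#   M3 = sum_i w_i * (a_i ⊗ a_i ⊗ a_i)
M3 = np.einsum('i,ai,bi,ci->abc', w, A, A, A)

# M3 is symmetric under any permutation of its three modes; the
# identifiability question is whether (w, A) is the *unique* symmetric
# CP decomposition of M3 when k exceeds d.
assert np.allclose(M3, M3.transpose(1, 0, 2))
assert np.allclose(M3, M3.transpose(2, 1, 0))
```

For k ≤ d with linearly independent topic columns, classical Kruskal-type conditions already give uniqueness; the overcomplete case (k > d, as here) is where additional structural constraints such as expansion conditions become necessary.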

Cited by 23 publications (39 citation statements)
References 40 publications (87 reference statements)
“…Similar (and yet not the same) expansion conditions have appeared in other contexts involving learning of overcomplete models. For instance, in [5], Anandkumar et al.…”
Section: Discussion
confidence: 99%
“…As we know, Tucker models are not identifiable in general, as there is linear transformation freedom. This can be alleviated when one can assume sparsity in C [132], G, or both (intuitively, this is because linear transformations generally do not preserve sparsity).…”
Section: F Topic Modeling
confidence: 99%
“…In addition, variants and generalizations of the problem (I.2) have also been studied in applications regarding control and optimization [25], nonrigid structure from motion [26], spectral estimation and Prony's problem [27], outlier rejection in PCA [28], blind source separation [29], graphical model learning [30], and sparse coding on manifolds [31]; see also [32] and the references therein.…”
Section: A Motivation
confidence: 99%