2016
DOI: 10.1137/16m1078318
Low-Rank Approximation and Completion of Positive Tensors

Abstract: Unlike the matrix case, computing low-rank approximations of tensors is NP-hard and numerically ill-posed in general. Even computing the best rank-1 approximation of a tensor is NP-hard. In this paper, we use convex optimization to develop polynomial-time algorithms for low-rank approximation and completion of positive tensors. Our approach is to use algebraic topology to define a new (numerically well-posed) decomposition for positive tensors, which we show is equivalent to the standard tensor decomposition i…

Cited by 5 publications (7 citation statements). References 50 publications.
“…Estimation consistency in any statistical setting (including inverse optimization with noisy data) requires that an identifiability condition holds, and such identifiability conditions can be stated under a variety of different mathematical formulations (Wald 1949, Jennrich 1969, Bartlett and Mendelson 2002, Greenshtein and Ritov 2004, Bickel and Doksum 2006, Chatterjee 2014, Aswani 2015). The intuition for these different formulations is the same: Essentially, an identifiability condition states that the output of the model is different for two distinct sets of model parameters.…”
Section: B. Identifiability in Inverse Optimization
confidence: 99%
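To make the quoted intuition concrete, identifiability is often formalized as injectivity of the parameter-to-output map. The notation below ($M$ for the model, $\Theta$ for the parameter set) is ours for illustration and is not drawn from any single cited paper:

```latex
% Generic identifiability condition (illustrative notation):
% distinct parameters must produce distinct model outputs.
\[
  \theta_1 \neq \theta_2
  \;\Longrightarrow\;
  M(\theta_1) \neq M(\theta_2)
  \qquad \text{for all } \theta_1, \theta_2 \in \Theta .
\]
```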
“…Tensor completion is the problem of observing (possibly with noise) a subset of entries of a tensor and then estimating the remaining entries based on an assumption of low-rankness. The tensor completion problem is encountered in a number of important applications, including computer vision (Liu et al., 2012; Zhang et al., 2019), regression with only categorical variables (Aswani, 2016), healthcare (Gandy et al., 2011; Dauwels et al., 2011), and many other domains (Song et al., 2019).…”
Section: Past Approaches To Tensor Completion
confidence: 99%
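For reference, the noiseless version of the completion problem described above is commonly written as a rank minimization over the observed entries; the notation here is ours, not quoted from the citing paper:

```latex
% Exact low-rank tensor completion: \Omega is the set of observed index
% tuples and T the partially observed tensor (notation ours).
\[
  \min_{X}\; \operatorname{rank}(X)
  \quad \text{subject to} \quad
  X_{i_1 \cdots i_d} = T_{i_1 \cdots i_d}
  \;\; \text{for all } (i_1, \dots, i_d) \in \Omega .
\]
```

Noisy variants replace the equality constraints with a penalty on the misfit over $\Omega$; the low-rank assumption is what makes recovery of the unobserved entries possible at all.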
“…However, there are a few special cases of tensors where algorithms that achieve the information-theoretic rate have been developed. Completion of nonnegative rank-1 tensors can be exactly written as a convex optimization problem (Aswani, 2016). For symmetric orthogonal tensors, a variant of the Frank-Wolfe algorithm has been proposed (Rao et al., 2015).…”
Section: Past Approaches To Tensor Completion
confidence: 99%
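The rank-1 claim rests on a simple observation: for a strictly positive rank-1 tensor $T_{ijk} = a_i b_j c_k$, the entrywise logarithm $\log T_{ijk} = \log a_i + \log b_j + \log c_k$ is linear in the log-factors, so fitting observed entries is a linear least-squares (hence convex) problem. The sketch below illustrates this general idea; the function name `complete_rank1_positive` and the synthetic data are our own, and this is not a reproduction of the exact formulation in Aswani (2016).

```python
# Minimal sketch: completing a strictly positive rank-1 tensor by linear
# least squares on the entrywise logarithm (a convex problem).
import numpy as np

def complete_rank1_positive(shape, observed):
    """observed: dict mapping index tuples (i, j, k) to positive values."""
    I, J, K = shape
    n_params = I + J + K  # one log-factor per index along each mode
    rows, rhs = [], []
    for (i, j, k), val in observed.items():
        row = np.zeros(n_params)
        row[i] = row[I + j] = row[I + J + k] = 1.0  # log a_i + log b_j + log c_k
        rows.append(row)
        rhs.append(np.log(val))
    # Ordinary least squares on the logs; lstsq returns the minimum-norm fit.
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
    la, lb, lc = x[:I], x[I:I + J], x[I + J:]
    # Reassemble the full tensor from the recovered log-factors.
    return np.exp(la[:, None, None] + lb[None, :, None] + lc[None, None, :])

# Tiny usage example on a synthetic 3x3x3 positive rank-1 tensor.
rng = np.random.default_rng(0)
a, b, c = rng.uniform(0.5, 2.0, (3, 3))
T = a[:, None, None] * b[None, :, None] * c[None, None, :]
obs_idx = [(0, 0, 0), (1, 1, 1), (2, 2, 2), (0, 1, 2), (2, 0, 1),
           (1, 2, 0), (0, 2, 1), (2, 1, 0), (1, 0, 2)]
T_hat = complete_rank1_positive((3, 3, 3), {idx: T[idx] for idx in obs_idx})
```

The log-factors are only determined up to a rescaling between modes, but that gauge freedom cancels when the factors are recombined; when the observation pattern pins the parameters down up to this rescaling, the reconstruction is exact up to floating-point error.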
“…Methods for non-negative CPD have been proposed to address this issue [e.g., 35, 50, 11, 4], as such decompositions make results meaningful and interpretable. In recommender system problems, however, the direct interpretation of latent factors is less critical, as the relative scale or ranking is what matters for recommendation.…”
Section: Context-aware Recommender Systems
confidence: 99%
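To give a concrete picture of what a non-negative CPD computes, here is a hedged sketch of a rank-$R$ non-negative CP decomposition of a 3-way tensor via Lee-Seung-style multiplicative updates. It is illustrative only; the works cited as [35, 50, 11, 4] develop their own, more refined algorithms.

```python
# Hedged sketch: non-negative CP decomposition of a 3-way tensor with
# multiplicative updates. Assumes T has non-negative entries.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding (C-order): rows are indexed by the chosen mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(B, C):
    """Column-wise Kronecker product: (J*K, R) from (J, R) and (K, R)."""
    J, R = B.shape
    K = C.shape[0]
    return np.einsum('jr,kr->jkr', B, C).reshape(J * K, R)

def nonneg_cp(T, rank, n_iter=200, eps=1e-12, seed=0):
    """Return factors A, B, C >= 0 with T ~ sum_r A[:,r] o B[:,r] o C[:,r]."""
    rng = np.random.default_rng(seed)
    factors = [rng.uniform(0.1, 1.0, (dim, rank)) for dim in T.shape]
    for _ in range(n_iter):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            KR = khatri_rao(*others)  # ordering matches the C-order unfolding
            Xn, A = unfold(T, mode), factors[mode]
            # Multiplicative update: keeps every entry non-negative.
            A *= (Xn @ KR) / (A @ (KR.T @ KR) + eps)
    return factors
```

Usage: `A, B, C = nonneg_cp(T, rank=2)`, then reconstruct the approximation with `np.einsum('ir,jr,kr->ijk', A, B, C)`; the non-negativity of the factors is what lends the decomposition its interpretability.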