2016
DOI: 10.1137/15m1028479

Regularized Computation of Approximate Pseudoinverse of Large Matrices Using Low-Rank Tensor Train Decompositions

Abstract: We propose a new method for low-rank approximation of Moore-Penrose pseudoinverses (MPPs) of large-scale matrices using tensor networks. The computed pseudoinverses can be useful for solving or preconditioning of large-scale overdetermined or underdetermined systems of linear equations. The computation is performed efficiently and stably based on the modified alternating least squares (MALS) scheme using low-rank tensor train (TT) decompositions and tensor network contractions. The formulated large-scale optim…
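The paper's TT-based algorithm works on tensor-network representations and is not reproduced here. As a minimal dense sketch of the object it approximates (NumPy assumed; the function name low_rank_reg_pinv is hypothetical), the snippet below forms the rank-r Tikhonov-regularized pseudoinverse X = V_r diag(s_i / (s_i^2 + λ)) U_r^T, which tends to the truncated Moore-Penrose pseudoinverse as λ → 0 and can serve as an approximate least-squares solver or crude preconditioner:

```python
import numpy as np

def low_rank_reg_pinv(A, rank, lam=1e-8):
    """Rank-`rank` Tikhonov-regularized approximate pseudoinverse of A.

    Dense stand-in for the idea in the paper: it realizes
    X = V_r @ diag(s_i / (s_i**2 + lam)) @ U_r.T, which approaches the
    truncated Moore-Penrose pseudoinverse as lam -> 0. The paper's
    TT/MALS algorithm computes such a low-rank approximation without
    ever forming A, its SVD, or X explicitly.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    # Regularized inversion of the retained singular values.
    return (Vt.T * (s / (s**2 + lam))) @ U.T

# Toy usage: approximate least-squares solution of an overdetermined system.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 50))
b = rng.standard_normal(500)
x = low_rank_reg_pinv(A, rank=50, lam=1e-6) @ b
print(np.linalg.norm(A.T @ (A @ x - b)))  # near-zero normal-equation residual
```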

Cited by 22 publications (16 citation statements). References 43 publications.
“…The related optimization problems often involve structured matrices and vectors with over a billion entries (see [67,81,87] and references therein). In particular, we focus on Symmetric Eigenvalue Decomposition (EVD/PCA) and Generalized Eigenvalue Decomposition (GEVD) [70,120,123], SVD [127], solutions of overdetermined and underdetermined systems of linear algebraic equations [71,159], the Moore-Penrose pseudo-inverse of structured matrices [129], and Lasso problems [130]. Tensor networks for extremely large-scale multi-block (multiview) data are also discussed, especially TN models for orthogonal Canonical Correlation Analysis (CCA) and related Partial Least Squares (PLS) problems.…”
Section: Scope and Objectives (mentioning)
confidence: 99%
“…This chapter introduces feasible solutions for several generic huge-scale dimensionality reduction and related optimization problems, whereby the involved optimized cost functions are approximated by suitable low-rank TT networks. In this way, a very large-scale optimization problem can be converted into a set of much smaller optimization sub-problems of the same kind [Cichocki, 2014, Holtz et al., 2012a, Kressner et al., 2014a, Lee and Cichocki, 2016b, Schollwöck, 2011], which can be solved using standard methods.…”
Section: Chapter (mentioning)
confidence: 99%
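The conversion described in the statement above, splitting one huge optimization problem into a sequence of small sub-problems of the same kind, is the essence of alternating least squares. As an illustrative toy (not the TT/MALS algorithm itself: two matrix factors stand in for a chain of TT cores; NumPy assumed, names hypothetical), the sketch below alternately fixes one factor and solves a small linear least-squares problem for the other:

```python
import numpy as np

def als_low_rank(A, r, n_sweeps=20):
    """Alternating least squares for min ||A - U @ V.T||_F.

    Two-factor toy version of the scheme the survey describes:
    fixing all factors but one turns the large nonconvex problem
    into a sequence of small linear least-squares sub-problems,
    each solvable by standard dense methods.
    """
    m, n = A.shape
    rng = np.random.default_rng(0)
    V = rng.standard_normal((n, r))
    for _ in range(n_sweeps):
        U = np.linalg.lstsq(V, A.T, rcond=None)[0].T  # solve for U, V fixed
        V = np.linalg.lstsq(U, A, rcond=None)[0].T    # solve for V, U fixed
    return U, V

A = np.random.default_rng(1).standard_normal((200, 120))
U, V = als_low_rank(A, r=10)
print(np.linalg.norm(A - U @ V.T) / np.linalg.norm(A))  # relative error
```

TT-based ALS applies the same alternation along a chain of tensor cores; the modified (MALS/DMRG-style) variant optimizes two neighboring cores jointly, which lets the TT ranks adapt during the sweeps.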
“…In particular, we focus on Symmetric Eigenvalue Decomposition (EVD/PCA) and Generalized Eigenvalue Decomposition (GEVD) […, Hubig et al., 2015, Huckle and Waldherr, 2012, Kressner et al., 2014a], SVD [Lee and Cichocki, 2015], solutions of overdetermined and underdetermined systems of linear algebraic equations [Dolgov and Savostyanov, 2014, Oseledets and Dolgov, 2012], the Moore-Penrose pseudo-inverse of structured matrices [Lee and Cichocki, 2016b], and LASSO regression problems [Lee and Cichocki, 2016a]. Tensor networks for extremely large-scale multi-block (multi-view) data are also discussed, especially TN models for orthogonal Canonical Correlation Analysis (CCA) and related Higher-Order Partial Least Squares (HOPLS) problems [Hou, 2017, Hou et al., 2016b, Zhao et al., 2011].…”
Section: Chapter (mentioning)
confidence: 99%