2016
DOI: 10.1561/2200000059
Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions

Abstract: Modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume, variety, and structural richness. However, standard machine learning algorithms typically scale exponentially with data volume and complexity of cross-modal couplings - the so-called curse of dimensionality - which is prohibitive to the analysis of large-scale, multi-modal and multi-relational datasets. Given that such data are often efficiently represented as multiway arrays or tensors …
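The exponential scaling referred to in the abstract is easy to quantify: a dense tensor with N modes of size I each stores I^N entries, whereas a tensor-train (tensor network) representation with uniform rank R needs only about N*I*R^2 parameters. The snippet below is a minimal illustrative sketch of this count, not material from the paper; the function names and the example values of N, I, and R are assumptions.

# Minimal sketch (not from the paper): parameter counts for a dense tensor
# versus a tensor-train (TT) representation with a uniform TT-rank R.

def full_tensor_params(num_modes, mode_size):
    """Entries in a dense tensor with `num_modes` modes of size `mode_size` each."""
    return mode_size ** num_modes

def tensor_train_params(num_modes, mode_size, tt_rank):
    """Parameters of a TT decomposition: boundary cores I*R, interior cores R*I*R."""
    if num_modes == 1:
        return mode_size
    interior = (num_modes - 2) * tt_rank * mode_size * tt_rank
    return 2 * mode_size * tt_rank + interior

N, I, R = 20, 4, 8                        # hypothetical sizes: 20 modes of size 4, TT-rank 8
print(full_tensor_params(N, I))           # 4**20, about 1.1e12 entries
print(tensor_train_params(N, I, R))       # 2*4*8 + 18*8*4*8 = 4672 parameters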

Cited by 356 publications (337 citation statements)
References 205 publications
“…Now, dimensionality reduction is an essential element of the engineering (the "practical man") approach to mathematical modeling [4]. Many model reduction methods were developed and successfully implemented in applications, from various versions of principal component analysis to approximation by manifolds, graphs, and complexes [5-7], and low-rank tensor network decompositions [8,9]. Various reasons and forms of the curse of dimensionality were classified and studied, from the obvious combinatorial explosion (for example, for n binary Boolean attributes, to check all the combinations of values we have to analyze 2^n cases) to more sophisticated distance concentration: in a high-dimensional space, the distances between randomly selected points tend to concentrate near their mean value, and the neighbor-based methods of data analysis become useless in their standard forms [10,11]. Many "good" polynomial time algorithms become useless in high dimensions. Surprisingly, however, and despite the expected challenges and difficulties, common-sense heuristics based on the simple and the most straightforward methods "can yield results which are almost surely optimal" for high-dimensional problems [12].…”
mentioning
confidence: 99%
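The distance-concentration phenomenon described in this statement can be reproduced in a few lines: as the dimension grows, the ratio of the standard deviation to the mean of pairwise distances between random points shrinks toward zero. The snippet below is a minimal sketch under assumed settings (uniformly random points in the unit cube, Euclidean distance); it is illustrative only and not taken from the cited works.

# Minimal sketch (assumed setup, not from the cited works): pairwise Euclidean
# distances between random points concentrate around their mean as the
# dimension grows, so the relative spread (std / mean) shrinks.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
for dim in (2, 10, 100, 1000):
    points = rng.uniform(size=(200, dim))    # 200 random points in the unit cube
    dists = pdist(points)                    # all pairwise Euclidean distances
    print(f"dim={dim:5d}  std/mean of distances = {dists.std() / dists.mean():.3f}")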
“…As one of the most powerful numerical tools for studying quantum many-body systems [6-9], tensor networks (TNs) have drawn more attention. For instance, TNs have been recently applied to solve machine learning problems such as dimensionality reduction [10,11] and handwriting recognition [12,13]. Just as a TN allows the numerical treatment of difficult physical systems by providing layers of abstraction, deep learning achieved similar striking advances in automated feature extraction and pattern recognition using a hierarchical representation [14].…”
Section: Introduction
mentioning
confidence: 99%
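As a concrete illustration of how a tensor network provides such layers of abstraction, the sketch below decomposes a full tensor into a tensor train (matrix product state) by a sweep of truncated SVDs over successive unfoldings. This TT-SVD-style sketch is illustrative only; the function names, rank handling, and example sizes are assumptions, not code from the cited works.

# Minimal TT-SVD-style sketch (illustrative assumptions, not the cited implementations):
# split a full tensor into a chain of small cores by sweeping truncated SVDs
# over successive unfoldings.
import numpy as np

def tt_svd(tensor, max_rank):
    """Return TT cores G_k of shape (r_{k-1}, I_k, r_k) approximating `tensor`."""
    dims = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * dims[0], -1)
    for k, dim in enumerate(dims[:-1]):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        new_rank = min(max_rank, len(s))
        cores.append(u[:, :new_rank].reshape(rank, dim, new_rank))
        mat = (s[:new_rank, None] * vt[:new_rank]).reshape(new_rank * dims[k + 1], -1)
        rank = new_rank
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into a full tensor (only to check the sketch)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 4, 4))
cores = tt_svd(x, max_rank=16)            # ranks large enough for an exact decomposition
print(np.allclose(tt_to_full(cores), x))  # True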
“…The development of new efficient methods and algorithms for tensor decomposition as a basic tool for dimensionality reduction of multidimensional data in processing and analyses that could appear in data mining tasks (big data analysis, machine/deep learning, statistics, scientific, and cloud computing, compression of image sequences, etc.) is certainly a question of present interest which attracts significant scientific research efforts (Bergqvist & Larsson, 2010; Cichocki et al., 2015, 2016; De Lathauwer, De Moor, & Vandewalle, 2000; Kolda & Bader, 2009; Liu & Wang, 2017; Lu, Plataniotis, & Venetsanopoulos, 2008; Ozdemir, Iwen, & Aviyente, 2016a; Rabanser, Shchur, & Günnemann, 2017; Zhang, Ely, Aeron, Hao, & Kilmer, 2014). Hunyadi, Dupont, Paesschen, and Huffel (2017) studied the functional magnetic resonance imaging and electroencephalography as a mixture of ongoing neural processes, physiological and nonphysiological noise.…”
Section: Introduction
mentioning
confidence: 99%
“…One of the main approaches for tensor decomposition is based on the algorithms for multilinear singular value decomposition (SVD) (De Lathauwer et al., 2000), also called the higher-order SVD (HOSVD) (Bergqvist & Larsson, 2010), the multiscale HOSVD (MS-HOSVD) (Ozdemir et al., 2016a), the canonical polyadic (CP) decomposition, the Tucker decomposition, and their derivatives (Cichocki et al., 2015; Kolda & Bader, 2009; Rabanser et al., 2017), the multilinear principal component analysis (MPCA) (Lu et al., 2008), and tensor networks (Cichocki et al., 2016). The general feature of these decompositions is that they are all based on the eigenvectors of matrices or vector sequences obtained as a result of various tensor transforms through reshaping operations: matricization, vectorization, and tensorization (Cichocki et al., 2016). These decompositions give full decorrelation of the tensor elements (entries) and, as a result, their energy is concentrated in the first decomposition components.…”
Section: Introduction
mentioning
confidence: 99%
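A concrete illustration of the decorrelation and energy-compaction property described above: the sketch below computes a higher-order SVD by taking the left singular vectors of each mode unfolding as factor matrices, then checks how much of the energy the leading block of the core tensor holds. The implementation and the example tensor are assumptions for illustration, not the algorithms of the cited references.

# Minimal HOSVD sketch (illustrative assumptions, not the cited implementations):
# factor matrices come from the SVD of each mode unfolding; the core tensor then
# concentrates most of the energy in its leading entries.
import numpy as np

def unfold(tensor, mode):
    """Mode-`mode` matricization: rows indexed by the chosen mode."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor):
    """Full HOSVD: per-mode orthonormal factor matrices and the core tensor."""
    factors = [np.linalg.svd(unfold(tensor, mode), full_matrices=False)[0]
               for mode in range(tensor.ndim)]
    core = tensor
    for u in factors:
        # contract the current leading mode with its factor; the mode moves to the
        # end, so after ndim contractions the original mode order is restored
        core = np.tensordot(core, u.conj().T, axes=([0], [1]))
    return core, factors

rng = np.random.default_rng(0)
# a tensor with (approximate) multilinear rank (3, 3, 3) plus small noise
g = rng.standard_normal((3, 3, 3))
a, b, c = (rng.standard_normal((n, 3)) for n in (8, 9, 10))
x = np.einsum('ijk,ai,bj,ck->abc', g, a, b, c) + 0.01 * rng.standard_normal((8, 9, 10))

core, factors = hosvd(x)
total = np.linalg.norm(core) ** 2                 # energy preserved: equals the squared norm of x
leading = np.linalg.norm(core[:3, :3, :3]) ** 2
print(f"energy in the leading 3x3x3 core block: {leading / total:.4f}")   # close to 1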