2018
DOI: 10.1109/tnnls.2017.2771264
Parallelized Tensor Train Learning of Polynomial Classifiers

Abstract: In pattern classification, polynomial classifiers are well-studied methods as they are capable of generating complex decision surfaces. Unfortunately, the use of multivariate polynomials is limited to kernels as in support-vector machines, because polynomials quickly become impractical for high-dimensional problems. In this paper, we effectively overcome the curse of dimensionality by employing the tensor train (TT) format to represent a polynomial classifier. Based on the structure of TTs, two learning algori…
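The dimensionality reduction the abstract refers to can be illustrated with a quick parameter count. The sketch below is a back-of-the-envelope illustration, not the paper's actual algorithm: it assumes a coefficient tensor with d modes of size n and a uniform TT-rank r, and compares dense storage (exponential in d) against TT storage (linear in d).

```python
# Storage comparison: dense coefficient tensor vs. tensor train (TT) format.
# A polynomial classifier over d features, each lifted to an n-dimensional
# local basis, has n**d coefficients when stored densely. In TT format the
# same tensor is factored into d cores, costing roughly d * n * r**2 entries.

def dense_params(d, n):
    """Dense coefficient tensor: one entry per multi-index."""
    return n ** d

def tt_params(d, n, r):
    """TT cores: boundary cores are 1 x n x r and r x n x 1,
    inner cores are r x n x r (assuming a uniform TT-rank r)."""
    return 2 * n * r + (d - 2) * n * r * r

d, n, r = 50, 2, 4  # 50 features, size-2 local basis, TT-rank 4
print(dense_params(d, n))  # 2**50, about 1.1e15 entries
print(tt_params(d, n, r))  # 1552 entries
```

With 50 features the dense representation is already infeasible, while the TT parameter count grows only linearly in the number of features, which is the "exponential to linear" reduction cited in the Introduction statement below.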

Cited by 37 publications (28 citation statements)
References 26 publications (55 reference statements)
“…Gorodetsky [11] proposed a method based on tensor train decomposition to solve the control synthesis problem efficiently. Chen [12] constructed a classifier model based on the tensor train format. Phien [13] combined a data-recovery algorithm with the tensor train format; the application of these tensor-based algorithms reflects the efficiency advantage of tensor train decomposition in processing high-order tensors.…”
Section: Related Work
confidence: 99%
“…This orthogonalisation is also performed on the initialisation of the cores. We refer to Oseledets (2011), Batselier et al (2017) and Chen, Batselier, Suykens, and Wong (2016) for this common step. For this orthogonalisation step, some assumptions are made on the sizes of the cores.…”
Section: Parametrisation
confidence: 99%
“…On the other hand, there has been an exploding number of works on tensors (a multilinear operator rooted in physics) [6] including their connection and utilization in various engineering fields such as signal processing [7], and lately also in neural networks and machine learning [8], [9], [10]. The power of tensors lies in their ability to lift the curse of dimensionality, reducing computational and storage complexity from exponential to linear cost.…”
Section: Introduction
confidence: 99%