2013
DOI: 10.1007/s10994-013-5366-3

Learning with tensors: a framework based on convex optimization and spectral regularization

Abstract: We present a framework based on convex optimization and spectral regularization to perform learning when feature observations are multidimensional arrays (tensors). We give a mathematical characterization of spectral penalties for tensors and analyze a unifying class of convex optimization problems for which we present a provably convergent and scalable template algorithm. We then specialize this class of problems to perform learning both in a transductive as well as in an inductive setting. In the transductiv…
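A common concrete instance of the spectral penalties discussed in the abstract is the overlapped nuclear norm, i.e., the sum of the nuclear norms of the mode-n unfoldings of a tensor. The NumPy sketch below is an illustrative implementation of that penalty only; it is not the authors' code, and the optional per-mode weights are an assumption added for generality.

```python
# Illustrative sketch (not the authors' code): the overlapped nuclear norm,
# one common spectral penalty for tensors, computed as a (weighted) sum of
# the nuclear norms of the mode-n unfoldings.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def overlapped_nuclear_norm(tensor, weights=None):
    """Sum over modes of the nuclear norm of each unfolding."""
    weights = weights if weights is not None else [1.0] * tensor.ndim
    penalty = 0.0
    for mode, w in enumerate(weights):
        singular_values = np.linalg.svd(unfold(tensor, mode), compute_uv=False)
        penalty += w * singular_values.sum()
    return penalty

# Example: evaluate the penalty on a random 3-way array.
X = np.random.randn(4, 5, 6)
print(overlapped_nuclear_norm(X))
```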

Cited by 197 publications (171 citation statements). References 65 publications.
“…In addition, we believe that the folded-concave penalization framework should also yield efficient and robust approaches to more general tensor optimization problems, such as low-rank tensor learning problems [14], and tensor robust principal analysis (tensor RPCA) [41].…”
Section: Results (mentioning; confidence: 99%)
“…Kressner et al [13] proposed a new algorithm that performs Riemannian optimization techniques on the manifold of tensors of fixed multi-linear rank. Signoretto et al [14] studied a learning framework with tensors and developed a hard completion algorithm for the tensor completion problem. Krishnamurthy and Singh [15] developed an efficient algorithm for tensor completion using the adaptive sampling technique.…”
Section: Introduction (mentioning; confidence: 99%)
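The completion methods surveyed in the statement above all exploit low multilinear rank through the unfoldings of the partially observed tensor. The sketch below is a hedged, generic illustration of that idea (iteratively soft-thresholding the singular values of each unfolding and re-imposing the observed entries); it is not the hard completion algorithm of Signoretto et al. [14] nor any of the other cited methods, and the threshold, iteration count, and averaging step are assumptions.

```python
# Generic low-rank tensor completion heuristic (illustration only): alternate
# singular value soft-thresholding on every mode-n unfolding with projection
# back onto the observed entries.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    moved = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(moved), 0, mode)

def svt(M, tau):
    """Soft-threshold the singular values of a matrix (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete(observed, mask, tau=1.0, iters=100):
    X = np.where(mask, observed, 0.0)
    for _ in range(iters):
        # Average the thresholded reconstructions across all modes...
        X = np.mean([fold(svt(unfold(X, m), tau), m, X.shape)
                     for m in range(X.ndim)], axis=0)
        # ...then keep the observed entries fixed.
        X = np.where(mask, observed, X)
    return X
```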
“…That means, each tensor data sample can be regarded as an abstract vector [13] whose elements are submatrix types of features. Gathering together the same feature information of different tensor data samples, we can construct new submatrix training sets and the same number of related training models, from which we can get an equal number of weight submatrices.…”
Section: Introduction (mentioning; confidence: 99%)
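Read literally, the construction quoted above regroups the same feature submatrix of every tensor sample into its own training set and fits one model per submatrix. The toy sketch below makes that concrete under assumed shapes (frontal slices as the feature submatrices) and a deliberately simple least-squares fit per slice; none of these choices come from the cited paper.

```python
# Toy illustration: regroup the k-th frontal slice of every 3-way sample into
# the k-th "submatrix training set" and fit one per-slice model, yielding one
# weight submatrix per feature type. Shapes and the fitting rule are assumed.
import numpy as np

n_samples, n_slices, rows, cols = 100, 4, 8, 8
samples = np.random.randn(n_samples, n_slices, rows, cols)  # synthetic data
labels = np.random.choice([-1.0, 1.0], size=n_samples)

# One training set per submatrix type, each of shape (n_samples, rows, cols).
submatrix_sets = [samples[:, k, :, :] for k in range(n_slices)]

# One (toy) linear model per submatrix type, fit by least squares on the
# vectorized slices; the reshaped solutions play the role of weight submatrices.
weight_submatrices = []
for Xk in submatrix_sets:
    A = Xk.reshape(n_samples, -1)                   # vectorize each slice
    w, *_ = np.linalg.lstsq(A, labels, rcond=None)  # least-squares fit
    weight_submatrices.append(w.reshape(rows, cols))
```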
“…Their algorithms are based on the Douglas-Rachford splitting technique and its dual variant, the alternating direction method of multipliers. Several other efficient algorithms can be found in [2,6,7].…”
Section: Introduction (mentioning; confidence: 99%)
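For context on the splitting methods named in the statement above, the sketch below shows a minimal ADMM loop for nuclear-norm regularized matrix completion; the tensor solvers cited apply the same pattern to the unfoldings. It is a textbook-style illustration with assumed penalty and step-size parameters, not the implementation from any of the cited papers.

```python
# Minimal ADMM sketch (illustration only) for
#   min_X  0.5 * ||P_Omega(X - B)||_F^2 + lam * ||X||_*
# using the split X = Z, where P_Omega keeps only the observed entries.
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def admm_completion(B, mask, lam=1.0, rho=1.0, iters=200):
    X = np.zeros_like(B)
    Z = np.zeros_like(B)
    U = np.zeros_like(B)  # scaled dual variable
    for _ in range(iters):
        # X-update: elementwise quadratic, solved in closed form.
        X = np.where(mask, (B + rho * (Z - U)) / (1.0 + rho), Z - U)
        # Z-update: proximal step on the nuclear norm.
        Z = svt(X + U, lam / rho)
        # Dual ascent step.
        U = U + X - Z
    return Z
```

ADMM can be derived as Douglas-Rachford splitting applied to the dual problem, which is the relationship the quoted statement alludes to.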