2020
DOI: 10.1109/tsp.2020.2982321

Block-Randomized Stochastic Proximal Gradient for Low-Rank Tensor Factorization

Abstract: This work considers the problem of computing the canonical polyadic decomposition (CPD) of large tensors. Prior works mostly leverage data sparsity to handle this problem, which is not suitable for handling dense tensors that often arise in applications such as medical imaging, computer vision, and remote sensing. Stochastic optimization is known for its low memory cost and per-iteration complexity when handling dense data. However, existing stochastic CPD algorithms are not flexible enough to incorporate a v…
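In broad strokes, the method samples one mode of the tensor at random in each iteration, draws a batch of fibers from that mode's unfolding, and takes a proximal stochastic gradient step on the corresponding latent factor. Below is a minimal NumPy sketch of this block-randomized idea under a nonnegativity prox; the function names, the fixed step size alpha, and the prox choice are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def mode_n_unfolding(X, n):
    """Matricize X along mode n (C-order over the remaining modes)."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def khatri_rao(mats):
    """Column-wise Khatri-Rao product; the last matrix varies fastest,
    matching the C-order unfolding above."""
    out = mats[0]
    for M in mats[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, out.shape[-1])
    return out

def bras_cpd_step(X, factors, n, batch, alpha, rng):
    """One stochastic proximal gradient step on the mode-n factor:
    sample fibers, form the stochastic gradient of the mode-n
    least-squares subproblem, then apply the prox (here: nonneg.)."""
    Xn = mode_n_unfolding(X, n)
    H = khatri_rao([factors[m] for m in range(X.ndim) if m != n])
    idx = rng.choice(H.shape[0], size=batch, replace=False)  # sampled fibers
    G = (factors[n] @ H[idx].T - Xn[:, idx]) @ H[idx] / batch
    return np.maximum(factors[n] - alpha * G, 0.0)

def bras_cpd(X, rank, iters=1000, batch=64, alpha=0.05, seed=0):
    """Outer loop: pick a mode uniformly at random, then update it."""
    rng = np.random.default_rng(seed)
    factors = [np.abs(rng.standard_normal((s, rank))) for s in X.shape]
    for _ in range(iters):
        n = int(rng.integers(X.ndim))  # block-randomized mode choice
        factors[n] = bras_cpd_step(X, factors, n, batch, alpha, rng)
    return factors
```

For clarity the sketch unfolds the full tensor; a practical implementation would read only the sampled fibers, which is where the low memory footprint comes from (see the fiber-extraction sketch further below).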

Cited by 43 publications (71 citation statements)
References 56 publications
“…In order to improve the convergence speed of the conventional PARAFAC-ALS algorithm, a wide range of remedies has been developed, such as line search [52], enhanced line search [53], extrapolation with optimized step size and search direction [54], and compression [46]. Besides the ALS algorithm, alternatives for computing PARAFAC models have also been proposed, including the all-at-once algorithm [55], the hierarchical conjugate gradient algorithm [56], a random gradient algorithm [57, 58], and the fast damped Gauss–Newton algorithm [52]. These alternatives have been reported to converge faster in many real cases.…”
Section: Multi-way Models
mentioning, confidence: 99%
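For reference, the conventional PARAFAC-ALS baseline that these remedies accelerate can be written in a few lines, reusing mode_n_unfolding and khatri_rao from the sketch above. This is a generic textbook version, not the code of any cited work; the remedies (line search, extrapolation, compression) would modify the inner loop.

```python
def als_cpd(X, rank, iters=100, seed=0):
    """Plain PARAFAC-ALS: cyclically solve the exact linear
    least-squares subproblem for each factor, others held fixed."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in X.shape]
    for _ in range(iters):
        for n in range(X.ndim):
            H = khatri_rao([factors[m] for m in range(X.ndim) if m != n])
            Xn = mode_n_unfolding(X, n)
            # normal equations: A_n^T = (H^T H)^(-1) H^T X(n)^T
            factors[n] = np.linalg.solve(H.T @ H, H.T @ Xn.T).T
        # line-search / extrapolation remedies would adjust factors here
    return factors
```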
“…In recent years, a fiber sampling strategy [14, 19, 21] has been used in Euclidean-loss-based tensor decomposition and completion to reduce complexity and memory burden. In [14, 19], fiber-sampling-based stochastic CPD algorithms select samples i that relate to a single latent factor A_n and update A_n from the selected fibers in each iteration. In the context of β-divergence-based CPD, the sampling strategy admits a couple of notable advantages:…”
Section: Fiber Sampling and Block Structure
mentioning, confidence: 99%
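The memory saving comes from never forming the full unfolding: each sampled mode-0 fiber X[:, j, k] of a 3-way tensor pairs with a single row B[j] * C[k] of the Khatri-Rao matrix. A toy illustration of this pairing follows, shown for the Euclidean loss for simplicity; the function names and the uniform sampling of (j, k) pairs are assumptions for illustration.

```python
def sample_mode0_fibers(X, batch, rng):
    """Draw `batch` mode-0 fibers of a 3-way tensor X (I x J x K);
    only O(I * batch) tensor entries are touched."""
    js = rng.integers(0, X.shape[1], size=batch)
    ks = rng.integers(0, X.shape[2], size=batch)
    return X[:, js, ks], js, ks          # fibers: I x batch

def sampled_gradient(fibers, A, B, C, js, ks):
    """Stochastic gradient of the mode-0 least-squares term: the
    needed Khatri-Rao rows B[j] * C[k] are built on the fly."""
    Hs = B[js] * C[ks]                   # batch x R
    return (A @ Hs.T - fibers) @ Hs / len(js)
```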
“…• Incorporating a Prior on A_n: Randomly sampling individual indices i [22] or selecting a subtensor [13] faces the issue that the samples may relate to only part of A_n, so useful prior information about the entire latent factor (e.g., column norm constraints) cannot be imposed; see [14, 15] for more discussion. Fiber sampling does not have this challenge.…”
Section: Fiber Sampling and Block Structure
mentioning, confidence: 99%
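Because every row of A_n is touched by the sampled fibers, a prior on the whole factor can be enforced by a proximal map after each stochastic step. A toy example for the column norm constraint mentioned here, assuming a Euclidean-ball radius nu (the name prox_column_norm is illustrative):

```python
def prox_column_norm(A, nu):
    """Project each column of A onto the Euclidean ball of radius nu,
    i.e. the prox of the indicator of {A : ||A[:, r]||_2 <= nu}."""
    norms = np.linalg.norm(A, axis=0, keepdims=True)
    return A * np.minimum(1.0, nu / np.maximum(norms, 1e-12))
```

Entry- or subtensor-sampling schemes update only some rows of A_n per step, so a whole-factor projection like this would act on stale rows; fiber sampling avoids that issue.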