2020
DOI: 10.1093/imaiai/iaaa028

Faster Johnson–Lindenstrauss transforms via Kronecker products

Abstract: The Kronecker product is an important matrix operation with a wide range of applications in signal processing, graph theory, quantum computing and deep learning. In this work, we introduce a generalization of the fast Johnson–Lindenstrauss projection for embedding vectors with Kronecker product structure, the Kronecker fast Johnson–Lindenstrauss transform (KFJLT). The KFJLT reduces the embedding cost by an exponential factor of the standard fast Johnson–Lindenstrauss transform's cost when applied to vectors with…
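As a rough illustration of the idea described in the abstract, the sketch below mixes each Kronecker factor with a fast-JL-style step (random sign flips followed by an FFT) and then subsamples entries of the Kronecker product of the mixed factors without ever forming it. This is a minimal numpy sketch under stated assumptions, not the authors' reference implementation: the name `kfjlt_embed`, the FFT-based mixing, the Rademacher signs, and the uniform sampling with replacement are illustrative choices.

```python
import numpy as np

def kfjlt_embed(factors, m, rng=None):
    """Embed x = kron(factors[0], ..., factors[-1]) into C^m without forming x."""
    rng = np.random.default_rng(0) if rng is None else rng
    mixed = []
    for xk in factors:
        n = xk.size
        signs = rng.choice([-1.0, 1.0], size=n)             # random sign flip (D_k)
        mixed.append(np.fft.fft(signs * xk) / np.sqrt(n))    # fast mixing (F D_k x_k)
    # Uniformly sampling a row of the full Kronecker product is equivalent to
    # sampling one index per mode independently and multiplying those entries.
    idx = [rng.integers(0, v.size, size=m) for v in mixed]
    samples = np.ones(m, dtype=complex)
    for v, i in zip(mixed, idx):
        samples *= v[i]
    total = np.prod([v.size for v in mixed])
    return np.sqrt(total / m) * samples                      # rescale the subsample

# Example: three factors of length 64 (total dimension 64**3 = 262144)
# embedded into m = 256 coordinates while only touching the small factors.
rng = np.random.default_rng(1)
x_factors = [rng.standard_normal(64) for _ in range(3)]
y = kfjlt_embed(x_factors, m=256)
print(y.shape)  # (256,)
```

The point of the sketch is the cost profile: each factor of length n is mixed in O(n log n) time, so the work scales with the sum of the factor lengths rather than their product.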

Cited by 22 publications (25 citation statements). References 26 publications.
“…An alternative approach would be to incorporate matrix sketching techniques to allow for more efficient computation of the SVD in the tensorized domain. For example, recent work that focuses on sketches for matrices with Kronecker product structure, such as TensorSketch [30] and its modifications [1] or the Kronecker fast Johnson–Lindenstrauss transform [21], could potentially be adapted to our setting.…”
Section: Discussion (mentioning)
confidence: 99%
“…They were first analyzed by Rudelson (2012). The paper (Sun, Guo, Tropp and Udell 2018) proposed the application of tensor random embeddings for randomized linear algebra; some extensions appear in (Jin, Kolda and Ward 2019) and (Malik and Becker 2019). See (Baldi and Vershynin 2019) and (Vershynin 2019) for related theoretical results.…”
Section: Structured Random Embeddings (mentioning)
confidence: 99%
“…More general n-stage modewise operators can be defined similarly. First analyzed in [21,23] for aiding in the rapid computation of the CP decomposition, such modewise compression operators offer a wide variety of computational advantages over standard vector-based approaches (in which R_1 is a vectorization operator so that d = 1, A_1 = A ∈ ℝ^{m × ∏_j n_j} is a standard Johnson–Lindenstrauss map, and all remaining operators R_2, B_1, …”
Section: Introduction and Prior Work (mentioning)
confidence: 99%
“…In particular, when R_1 is a more modest reshaping (or even the identity), the resulting modewise linear transforms can be formed using significantly fewer random variables (effectively, independent random bits) and stored using less memory by avoiding the use of a single massive m × ∏_j n_j matrix. In addition, such modewise linear operators also offer trivially parallelizable operations, faster serial data evaluations than standard vectorized approaches do for structured data (see, e.g., [23]), and the ability to better respect the multimodal structure of the given tensor data.…”
Section: Introduction and Prior Work (mentioning)
confidence: 99%
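To make the modewise compression described in the excerpt above concrete, here is a small numpy sketch that applies an independent Gaussian JL matrix along each mode of a tensor via mode-k products, instead of a single large map on the vectorized tensor. The function name, the shapes, and the Gaussian choice are illustrative assumptions rather than the cited papers' exact construction.

```python
import numpy as np

def modewise_compress(T, sketch_dims, rng=None):
    """Compress tensor T mode by mode: mode k shrinks from T.shape[k] to sketch_dims[k]."""
    rng = np.random.default_rng(0) if rng is None else rng
    for k, m_k in enumerate(sketch_dims):
        n_k = T.shape[k]
        A_k = rng.standard_normal((m_k, n_k)) / np.sqrt(m_k)   # small JL map for mode k
        # Mode-k product: contract A_k with the k-th axis of T, then restore axis order.
        T = np.moveaxis(np.tensordot(A_k, T, axes=(1, k)), 0, k)
    return T

# Example: a 50 x 60 x 70 tensor compressed to 10 x 10 x 10 using three small
# matrices (10x50, 10x60, 10x70) instead of one 1000 x 210000 matrix.
T = np.random.default_rng(2).standard_normal((50, 60, 70))
S = modewise_compress(T, sketch_dims=(10, 10, 10))
print(S.shape)  # (10, 10, 10)
```

The memory advantage mentioned in the excerpt is visible here: the per-mode matrices together hold far fewer random entries than a single map acting on the full vectorization would.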