2018
DOI: 10.1093/imaiai/iay017
Matrix decompositions using sub-Gaussian random matrices

Abstract: In recent years, several algorithms that approximate matrix decompositions have been developed. These algorithms rely on the metric-conservation properties of random-projection-type maps on linear subspaces. We show that an i.i.d. sub-Gaussian matrix whose entries are zero with high probability is metric conserving. We also present a new algorithm that, with high probability, computes a rank-r decomposition approximation of an m×n matrix, with asymptotic complexity matching state-of-the-art algorithms. We derive …
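As a reading aid for the abstract, here is a minimal sketch, not the paper's algorithm, of a standard randomized rank-r pipeline driven by a sparse i.i.d. sub-Gaussian test matrix (entries are zero with probability 1 − q and ±1/√q otherwise, so they have mean 0 and unit variance). The function names sparse_subgaussian and randomized_rank_r and the sparsity parameter q are illustrative assumptions, not names from the paper.

```python
import numpy as np

def sparse_subgaussian(n, k, q=0.1, rng=None):
    """i.i.d. sub-Gaussian test matrix: each entry is 0 with probability
    1 - q and +/- 1/sqrt(q) with probability q/2 each (mean 0, variance 1).
    The sparsity parameter q is an illustrative assumption."""
    rng = np.random.default_rng() if rng is None else rng
    signs = rng.choice([-1.0, 1.0], size=(n, k))
    mask = rng.random((n, k)) < q                  # keep ~q of the entries
    return signs * mask / np.sqrt(q)

def randomized_rank_r(A, r, oversample=10, rng=None):
    """Rank-r SVD approximation of A via a sparse sub-Gaussian sketch."""
    m, n = A.shape
    Omega = sparse_subgaussian(n, r + oversample, rng=rng)
    Y = A @ Omega                                  # sample the range of A
    Q, _ = np.linalg.qr(Y)                         # orthonormal basis for Y
    B = Q.T @ A                                    # small (r+p) x n matrix
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_hat[:, :r], s[:r], Vt[:r]         # truncated rank-r factors
```

With q small, most entries of Omega are zero, so forming A @ Omega is the step a sparse test matrix accelerates relative to a dense Gaussian sketch; the abstract's metric-conservation result is what makes such a sparse sketch admissible.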

Cited by 12 publications (11 citation statements). References 39 publications (81 reference statements).
“…For the real data sets applying the multi-view approach requires an eigendecomposition of large matrices. To reduce the runtime of experiments we use an approximate matrix decomposition based on sparse random projections [43].…”
Section: B. Multi-view Clustering (citation type: mentioning; confidence: 99%)
“…8, the views X and Y, which were generated by Eqs. (43) and (44), are presented. Color and shape indicate the ground-truth clusters.…”
Section: B. Multi-view Clustering (citation type: mentioning; confidence: 99%)
“…However, as the support should be determined such that the least-squares matrix is invertible, we get that Ĩ ∝ O(d^m). Thus, we can use a randomized rank-d SVD implementation such as the one detailed in [1] and reduce the complexity of this step to O(n·Ĩ) + Õ(n·d^2), where Õ neglects logarithmic factors of d. Plugging in the estimated size of Ĩ, we get that the overall complexity of Step 1 amounts to O(n·d^m).

Corollary 3.10. The overall complexity for the projection of a given point r onto the approximating manifold is O(n·d^m + d^(3m)).…”
Section: Complexity of the MMLS Projection (citation type: mentioning; confidence: 99%)
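For readability, the substitution behind the excerpt's Step 1 bound, written out (symbols as in the quoted paper; the m ≥ 2 condition, which makes the n·d^m term dominate the Õ(n·d²) term, is our reading and is not stated in the excerpt):

```latex
\tilde{I} \propto d^{m}
\;\Longrightarrow\;
O\!\left(n \cdot \tilde{I}\right) + \tilde{O}\!\left(n \cdot d^{2}\right)
  = O\!\left(n \cdot d^{m}\right) \quad (m \ge 2),
\qquad
\underbrace{O\!\left(n \cdot d^{m}\right)}_{\text{Step 1}}
  + O\!\left(d^{3m}\right)
  = O\!\left(n \cdot d^{m} + d^{3m}\right).
```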
“…Hence, it is often possible to zero out the small values of W^k_{i,j}, allowing for cheaper sparse-matrix computations. Additionally, computing the eigendecomposition in step 6 for large datasets (large N) may be accomplished more efficiently using randomized methods [17, 2].…”
Section: Algorithms Summary and Computational Cost (citation type: mentioning; confidence: 99%)
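Both this excerpt and the multi-view one above defer large eigendecompositions to randomized methods. A minimal sketch of one standard pattern under that reading (a random range finder followed by a small dense symmetric eigensolve; the name randomized_eigh and the oversampling default are assumptions, not the cited implementations):

```python
import numpy as np

def randomized_eigh(W, k, oversample=10, rng=None):
    """Approximate the top-k eigenpairs of a symmetric N x N matrix W.

    Pattern: sample the dominant range of W, project W onto that basis,
    then solve the small dense symmetric eigenproblem exactly.
    """
    rng = np.random.default_rng() if rng is None else rng
    N = W.shape[0]
    G = rng.standard_normal((N, k + oversample))   # Gaussian test matrix
    Q, _ = np.linalg.qr(W @ G)                     # basis for range(W G)
    T = Q.T @ (W @ Q)                              # small (k+p) x (k+p) matrix
    evals, S = np.linalg.eigh(T)                   # ascending eigenvalues
    top = np.argsort(np.abs(evals))[::-1][:k]      # keep k largest in magnitude
    return evals[top], Q @ S[:, top]               # approximate eigenpairs
```

For a sparse W, the products W @ G and W @ Q reduce to sparse matrix-vector multiplies, which is where the savings for large N come from.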