2017
DOI: 10.1109/tsipn.2016.2631890

Compressive PCA for Low-Rank Matrices on Graphs

Abstract: We introduce a novel framework for the approximate recovery of data matrices which are low-rank on graphs, from sampled measurements. The rows and columns of such matrices belong to the span of the first few eigenvectors of the graphs constructed between their rows and columns. We leverage this property to recover the non-linear low-rank structures efficiently from sampled data measurements, with a low cost (linear in n). First, a Restricted Isometry Property (RIP) condition is introduced for efficien…
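As an illustration of the property the abstract describes, here is a minimal numerical sketch in Python/NumPy (not the paper's code): it builds a matrix whose columns lie in the span of the first k Laplacian eigenvectors of a row graph and whose rows lie in the span of the first k eigenvectors of a column graph, using toy path graphs as stand-ins for the data-dependent graphs, and checks that the matrix has rank k and is unchanged by projection onto those spans.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 120, 90, 4   # rows, columns, graph rank (illustrative choices)

def path_laplacian(m):
    """Combinatorial Laplacian L = D - A of a path graph on m nodes."""
    A = np.diag(np.ones(m - 1), 1)
    A = A + A.T
    return np.diag(A.sum(axis=1)) - A

# First k (smoothest) Laplacian eigenvectors of the toy row and column graphs.
_, Ur = np.linalg.eigh(path_laplacian(n))
_, Uc = np.linalg.eigh(path_laplacian(p))
Ur_k, Uc_k = Ur[:, :k], Uc[:, :k]

# X = Ur_k C Uc_k^T is "low-rank on graphs": rank <= k, with rows and columns
# in the spans of the first k eigenvectors of the two graphs.
X = Ur_k @ rng.standard_normal((k, k)) @ Uc_k.T

print("rank of X:", np.linalg.matrix_rank(X))        # -> k
# Projecting onto the two eigenvector spans leaves X unchanged:
P = Ur_k @ Ur_k.T @ X @ Uc_k @ Uc_k.T
print("projection error:", np.linalg.norm(X - P))    # -> ~0
```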

Cited by 4 publications (3 citation statements) · References 47 publications
“…For the 2D tensors (matrices) with Gaussian noise, we compare GSVD with simple SVD. Finally, for the 2D matrix with sparse noise, we compare TRPCAG with Robust PCA (RPCA) [2], Robust PCA on Graphs (RPCAG) [24], Fast Robust PCA on Graphs (FRPCAG) [25], and Compressive PCA (CPCA) [26]. Not all the methods are tested on all the datasets due to computational reasons.…”
Section: Results
confidence: 99%
“…Compressive PCA for low-rank matrices on graphs (CPCA) [26]: 1) sample the matrix Y by a factor of s_r along the rows and s_c along the columns as Ŷ = M_r Y M_c and solve FRPCAG to get X; 2) compute the SVD of the X recovered in step 1, X = Û_1 R Û_2^T; 3) decode the low-rank X for the full dataset Y by solving the subspace upsampling problems below. The plots show the reconstruction error for 30 singular values, the 3rd plots show the subspace angle between the first 5 singular vectors, the 4th plot shows the computation time, the 5th plot shows the memory requirement (size of the original tensor, memory for GMLSVD, memory for MLSVD), and the rightmost plots show the dimension of the tensor and the core along each mode, and hence the amount of compression obtained. Clearly GMLSVD performs better than MLSVD in a low SNR regime.…”
Section: A53 Information On Methods and Parameters
confidence: 99%
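The three steps quoted above can be sketched numerically as follows. This is a hedged illustration, not the authors' implementation: the FRPCAG solve on the sampled block is replaced by a plain truncated SVD, the "subspace upsampling" decode is written as graph-Tikhonov interpolation of the sampled singular vectors on toy path graphs (an assumed, simplified form of the decoder; the actual problems are the ones the quotation refers to as "below"), and all sizes and the weight gamma are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 120, 90, 4        # full matrix size and target rank
s_r, s_c = 3, 3             # row/column sampling factors
gamma = 1e-3                # smoothness weight for the assumed Tikhonov decoder

def path_laplacian(m):
    """Combinatorial Laplacian L = D - A of a path graph on m nodes (toy graphs)."""
    A = np.diag(np.ones(m - 1), 1)
    A = A + A.T
    return np.diag(A.sum(axis=1)) - A

# Toy data that is low-rank on the row and column path graphs.
L_r, L_c = path_laplacian(n), path_laplacian(p)
U_r = np.linalg.eigh(L_r)[1][:, :k]
U_c = np.linalg.eigh(L_c)[1][:, :k]
Y = U_r @ rng.standard_normal((k, k)) @ U_c.T

# Step 1: sample by factors s_r, s_c (selection matrices M_r, M_c), then recover
# a low-rank X of the sampled block; a truncated SVD stands in for FRPCAG here.
rows, cols = np.arange(0, n, s_r), np.arange(0, p, s_c)
Y_hat = Y[np.ix_(rows, cols)]                      # Ŷ = M_r Y M_c
U, s, Vt = np.linalg.svd(Y_hat, full_matrices=False)
X = (U[:, :k] * s[:k]) @ Vt[:k]

# Step 2: SVD of the recovered X = Û1 R Û2^T.
U1, r, U2t = np.linalg.svd(X, full_matrices=False)
U1, R, U2 = U1[:, :k], np.diag(r[:k]), U2t[:k].T

# Step 3 (assumed form): upsample each subspace to full size by solving
#   min_V ||V[rows] - Û1||_F^2 + gamma * tr(V^T L_r V),
# whose normal equations are (M_r^T M_r + gamma L_r) V = M_r^T Û1; likewise for Û2.
B1 = np.zeros((n, k)); B1[rows] = U1               # M_r^T Û1
B2 = np.zeros((p, k)); B2[cols] = U2               # M_c^T Û2
D_r = np.diag(np.isin(np.arange(n), rows).astype(float))
D_c = np.diag(np.isin(np.arange(p), cols).astype(float))
V1 = np.linalg.solve(D_r + gamma * L_r, B1)
V2 = np.linalg.solve(D_c + gamma * L_c, B2)

# Decode: re-fit a small core on the sampled block (R is not reused directly, to
# sidestep sampling-scale factors) and assemble the full-size estimate.
G = np.linalg.pinv(V1[rows]) @ Y_hat @ np.linalg.pinv(V2[cols]).T
Y_dec = V1 @ G @ V2.T
print("relative decoding error:", np.linalg.norm(Y - Y_dec) / np.linalg.norm(Y))
```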
“…A typical, simple approach to solve this problem is to compute the PCA transform of each image to be classified, and then feed a few of these PCA coefficients to a linear SVM classifier, which will make the final decision [43], [44]. Another option is to exploit the low-rank structure of the data [45]. We note that, when computing the PCA coefficients, the entire image is needed.…”
Section: MNIST Handwritten Digits Classification
confidence: 99%
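The PCA-then-linear-SVM baseline described in this quotation can be sketched in a few lines of scikit-learn. This is an illustrative stand-in rather than the cited authors' setup: scikit-learn's small 8x8 digits dataset replaces MNIST so the example runs without downloads, and keeping 20 principal components is an arbitrary choice.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Load small digit images (stand-in for MNIST) and split into train/test sets.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Keep a few PCA coefficients per image and feed them to a linear SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=20), LinearSVC(dual=False))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

Note that, as the quotation points out, the PCA projection uses every pixel of the image, which is the limitation the citing work contrasts with compressive approaches.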