2015
DOI: 10.1109/tsp.2015.2417491
Subspace Learning and Imputation for Streaming Big Data Matrices and Tensors

Abstract: Extracting latent low-dimensional structure from high-dimensional data is of paramount importance in timely inference tasks encountered with 'Big Data' analytics. However, increasingly noisy, heterogeneous, and incomplete datasets, as well as the need for real-time processing of streaming data, pose major challenges to this end. In this context, the present paper permeates benefits from rank minimization to scalable imputation of missing data, via tracking low-dimensional subspaces and unraveling latent (possibl…
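The approach outlined in the abstract, tracking a low-dimensional subspace from partially observed streaming vectors and using it to impute the missing entries, can be sketched with a generic incremental subspace tracker. This is a minimal illustration under assumed settings (dimensions, sampling rate, step size, and the gradient-plus-QR update are all illustrative), not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic streaming data: each incoming vector lies in a fixed r-dim subspace,
# and only ~60% of its entries are observed. All sizes/steps are illustrative.
d, r, T = 50, 3, 2000
U_true = rng.standard_normal((d, r))
U, _ = np.linalg.qr(rng.standard_normal((d, r)))   # current subspace estimate
step = 0.5                                         # gradient step (hand-tuned)

err = []
for t in range(T):
    y = U_true @ rng.standard_normal(r)            # full vector (unseen in practice)
    mask = rng.random(d) < 0.6                     # indices actually observed
    # 1) Fit the observed entries to the current subspace by least squares.
    w, *_ = np.linalg.lstsq(U[mask], y[mask], rcond=None)
    y_hat = U @ w                                  # 2) imputation fills the missing entries
    # 3) Gradient-style subspace update on the observed residual, then re-orthonormalize.
    resid = np.zeros(d)
    resid[mask] = y[mask] - y_hat[mask]
    U, _ = np.linalg.qr(U + step * np.outer(resid, w) / (w @ w + 1e-12))
    err.append(np.linalg.norm(y - y_hat) / np.linalg.norm(y))

print(f"mean relative imputation error over the last 100 vectors: {sum(err[-100:])/100:.3f}")
```

Each arriving vector costs only a small least-squares solve and a rank-one update, which is what makes subspace tracking attractive for streaming data compared with batch rank-minimization.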

Cited by 174 publications (200 citation statements).
References 39 publications (98 reference statements).
“…Convergence of SOMF: Similar to [15,34], we show that the sequence of iterates (D_t)_t asymptotically reaches a critical point of the empirical risk (3). We introduce the same hypothesis on the code covariance estimation C̄_t as in [15] and a similar one on G_t; they ensure strong convexity of the surrogate and boundedness of (α_t)_t.…”
Section: Convergence Analysis (supporting)
confidence: 72%
See 1 more Smart Citation
“…Convergence of SOMF Similar to [15,34], we show that the sequence of iterates (D t ) t asymptotically reaches a critical point of the empirical risk (3). We introduce the same hypothesis on the code covariance estimationC t as in [15] and a similar one on G tthey ensure strong convexity of the surrogate and boundedness of (α t ) t .…”
Section: Convergence Analysissupporting
confidence: 72%
“…1 Note that we solve the fully observed problem despite the use of subsampled data, unlike other recent work on low-rank factorization [34].…”
Section: A Subsampled Online Matrix Factorization (mentioning)
confidence: 99%
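The online matrix-factorization setting referenced above can be illustrated with a generic streaming loop that alternates a per-sample code fit with a closed-form factor update from accumulated sufficient statistics. This is a hedged sketch (ridge-regularized codes, fully observed samples, illustrative sizes), not the cited subsampled algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stream of samples generated from a ground-truth factor matrix. Sizes, the
# ridge penalty, and the update rule are illustrative, not the cited method.
d, k, T, lam = 20, 5, 3000, 0.1
D_true = rng.standard_normal((d, k))
D = rng.standard_normal((d, k))     # current dictionary/factor estimate
A = lam * np.eye(k)                 # accumulated code Gram matrix
B = np.zeros((d, k))                # accumulated sample-code correlations

err = []
for t in range(T):
    x = D_true @ rng.standard_normal(k)
    # Code step: ridge regression of x onto the current factors.
    a = np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ x)
    # Accumulate sufficient statistics of the quadratic surrogate.
    A += np.outer(a, a)
    B += np.outer(x, a)
    # Factor step: closed-form minimizer of the accumulated surrogate.
    D = B @ np.linalg.inv(A)
    err.append(np.linalg.norm(x - D @ a) / np.linalg.norm(x))

print(f"mean relative reconstruction error over the last 100 samples: {sum(err[-100:])/100:.3f}")
```

The accumulated matrices A and B are exactly the sufficient statistics of the quadratic surrogate whose strong convexity the quoted convergence analysis relies on.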
“…Additionally, interpretable classifiers that can handle massive and skewed data are of interest. We must develop methods for processing and classifying big data in the form of graphs, XML structures, video sequences, hyperspectral images, associations, tensors, etc. [7,37]. Such data types are becoming more and more frequent in both imbalanced and big data analytics and impose certain restrictions on machine learning systems.…”
Section: Imbalanced Big Data (mentioning)
confidence: 99%
“…Most recently, low-rank matrices have come to play an increasingly central role in large-scale data analysis and dimensionality reduction [8,87]. Recovering a low-rank matrix is a fundamental problem with applications in machine learning [88].…”
Section: Possible Remedies (mentioning)
confidence: 99%
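The low-rank recovery problem mentioned in the last citation can be demonstrated with a simple iterative SVD-imputation scheme: repeatedly replace the missing entries with the current best rank-r approximation. This is an illustrative sketch under assumed settings (noiseless rank-2 data, ~50% random sampling), not a specific published algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Low-rank ground truth and a random observation mask (~50% observed).
m, n, r = 40, 30, 2
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.5

# Iterative SVD imputation: truncate to rank r, refill the missing entries
# from the truncation, and keep the observed entries fixed.
X = np.where(mask, M, 0.0)
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    low_rank = (U[:, :r] * s[:r]) @ Vt[:r]   # best rank-r approximation of X
    X = np.where(mask, M, low_rank)          # observed entries stay pinned to M

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative recovery error: {rel_err:.4f}")
```

Because the number of observed entries (~600) far exceeds the degrees of freedom of a rank-2 matrix of this size, the iteration recovers M accurately; with fewer samples or higher rank the same loop can stall, which is what motivates the rank-minimization formulations discussed in the paper.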