2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2017.7952934

Simultaneous low-rank component and graph estimation for high-dimensional graph signals: Application to brain imaging

Abstract: We propose an algorithm to uncover the intrinsic low-rank component of a high-dimensional, graph-smooth and grossly corrupted dataset, under the situation that the underlying graph is unknown. Based on a model with a low-rank component plus a sparse perturbation, and an initial graph estimate, our proposed algorithm simultaneously learns the low-rank component and refines the graph. The refined graph improves the effectiveness of the graph smoothness constraint and increases the accuracy of the low-rank est…
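The abstract describes a model in which the observed data decompose into a low-rank part plus a sparse (gross) corruption, with the low-rank part assumed smooth on a graph. The paper's own algorithm is not shown on this page; below is a minimal illustrative sketch of that kind of decomposition, combining standard robust-PCA proximal steps (singular value thresholding and soft thresholding) with a Tikhonov graph-smoothing filter. All function names and hyperparameters here are assumptions for illustration, not the authors' method.

```python
import numpy as np

def soft_threshold(A, tau):
    """Elementwise shrinkage: the proximal operator of the l1 norm."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def svt(A, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def graph_robust_pca(X, Lap, tau_l=0.5, tau_s=0.1, gamma=1.0, iters=50):
    """Heuristic alternating scheme (illustrative, not the paper's algorithm):
      L <- SVT of the graph-smoothed residual    (low-rank step)
      S <- soft-threshold of X - L               (sparse step)
    The Tikhonov filter (I + gamma * Lap)^-1 attenuates high graph
    frequencies of the residual, enforcing graph smoothness of L."""
    n = X.shape[0]
    smooth = np.linalg.inv(np.eye(n) + gamma * Lap)
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        L = svt(smooth @ (X - S), tau_l)
        S = soft_threshold(X - L, tau_s)
    return L, S
```

By construction of the final soft-threshold step, the residual X - L - S is bounded elementwise by `tau_s`, so the sparse component absorbs only entries that deviate noticeably from the smoothed low-rank fit.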

Cited by 11 publications (12 citation statements). References 34 publications.
“…The works of [13] and [36] are most related to our work. Different from the method in [13], we focus on spatiotemporal signals, and hence fully exploit both the long- and short-term correlation properties of spatiotemporal signals to facilitate graph learning.…”
Section: A. Related Work
confidence: 98%
“…Different from the method in [13], we focus on spatiotemporal signals, and hence fully exploit both the long- and short-term correlation properties of spatiotemporal signals to facilitate graph learning. Rui et al. [36] and this work both jointly estimate the low-rank component and the graph structure, but the authors in [36] propose a single scheme integrating [13] and [37] to address the lack of a representation linking the graph and the observed signals. Besides, the formulation and algorithm of [36] differ from this work.…”
Section: A. Related Work
confidence: 99%
“…The graph frequency decomposition of neuroimaging data shows promise for analyzing brain signals and connectivity [23]; see also the numerical test in Section IX-C. For supervised classification of brain states (in response to different visual stimuli), GFT-based dimensionality reduction of functional magnetic resonance imaging (fMRI) data has been shown to outperform state-of-the-art reduction techniques relying on PCA or independent component analysis (ICA) [37]; see also [47] for related approaches dealing with electroencephalogram (EEG) data. Results in [37] indicate that the smooth signal prior along with the graph learning approach in [25] yield the best performance for the aforementioned classification task.…”
Section: Relevance to Applications
confidence: 99%
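The excerpt above mentions GFT-based dimensionality reduction: a graph signal is expanded in the eigenbasis of the graph Laplacian, and only the low-graph-frequency coefficients are retained. A minimal sketch of that idea follows; the function names (`gft_reduce`, `gft_reconstruct`) are hypothetical, and the method shown is the generic GFT truncation, not the specific pipeline of [37].

```python
import numpy as np

def gft_reduce(x, Lap, k):
    """Project a graph signal x onto its k lowest-graph-frequency components.
    The GFT basis is the eigenvector matrix of the graph Laplacian; eigh
    returns eigenvalues (graph frequencies) in ascending order."""
    _, U = np.linalg.eigh(Lap)
    coeffs = U.T @ x       # forward GFT
    return coeffs[:k], U   # keep the k smoothest coefficients

def gft_reconstruct(coeffs, U):
    """Inverse GFT from the retained low-frequency coefficients."""
    k = len(coeffs)
    return U[:, :k] @ coeffs
```

For a signal that is smooth on the graph (e.g., a linear ramp on a path graph), most of the energy sits in the first few coefficients, so a short coefficient vector reconstructs the signal with small error; this is what makes the truncated GFT usable as a feature-reduction step before classification.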
“…Kalofolias et al. [30] extended this framework by establishing the link between smoothness and sparsity and adding a regularization term on the degree vector to ensure that each vertex has at least one incident edge. Different variations of these frameworks to handle missing values and sparse outliers in the graph signals were considered in [31, 32, 33, 34, 35]. All of the previous works learn unsigned graphs with the exception of [36], where a signed graph is learned by employing the signed graph Laplacian defined in [37].…”
Section: Graph Learning
confidence: 99%
“…This analysis results in an optimization problem where a graph is learned such that variation of signals over the learned graph is minimized. Different variations of this framework with constraints on the learned topology and for handling noisy graph signals were considered in (Kalofolias, 2016; Hou et al., 2016; Berger et al., 2020; Kadambari and Chepuri, 2020; Rui et al., 2017). All of the previous works learn unsigned graphs with the exception of (Matz and Dittrich, 2020), where a signed graph is learned by employing the signed graph Laplacian defined by Kunegis et al. (2010).…”
Section: Introduction
confidence: 99%
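Both excerpts above describe the same core idea: learn a graph over which the observed signals vary as little as possible. A projected-gradient sketch in that spirit is shown below; the objective (a Kalofolias-style combination of a smoothness term, a log-barrier on node degrees, and a Frobenius regularizer) and all hyperparameters are illustrative assumptions, not any cited paper's exact formulation.

```python
import numpy as np

def learn_graph(X, alpha=1.0, beta=0.1, step=0.05, iters=500):
    """Projected-gradient sketch of smoothness-based graph learning:
        min_{W >= 0, W = W.T, diag(W) = 0}
            sum_ij Z_ij W_ij - alpha * sum_i log(deg_i) + beta * ||W||_F^2
    where Z_ij = ||x_i - x_j||^2 and deg_i = sum_j W_ij.
    The first term penalizes strong edges between dissimilar nodes, the
    log-barrier keeps every vertex connected, and the Frobenius term
    controls edge-weight magnitude. Illustrative, not a cited algorithm."""
    n = X.shape[0]
    Z = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Z = Z / Z.max()                      # normalize distances (illustrative)
    W = np.ones((n, n)) - np.eye(n)      # start fully connected
    for _ in range(iters):
        deg = np.maximum(W.sum(axis=1), 1e-8)   # guard against zero degrees
        grad = Z - alpha * (1.0 / deg[:, None] + 1.0 / deg[None, :]) + 2 * beta * W
        W = W - step * grad
        W = np.maximum((W + W.T) / 2.0, 0.0)    # project: symmetric, >= 0
        np.fill_diagonal(W, 0.0)                # no self-loops
    return W
```

Given signals that are smooth on a path graph, pairs of adjacent nodes have small `Z` and so end up with the largest learned weights, while distant pairs are driven to zero, recovering the underlying topology.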