2019
DOI: 10.1109/tsp.2018.2887401
Low Rank and Structured Modeling of High-Dimensional Vector Autoregressions

Abstract: Network modeling of high-dimensional time series data is a key learning task due to its widespread use in a number of application areas, including macroeconomics, finance and neuroscience. While the problem of sparse modeling based on vector autoregressive models (VAR) has been investigated in depth in the literature, more complex network structures that involve low rank and group sparse components have received considerably less attention, despite their presence in data. Failure to account for low-rank struct…

Cited by 71 publications (77 citation statements)
References 35 publications
“…Note that the tuning parameters provided in (7) are different from the tuning parameters in (6); the log T terms are eliminated, since on the selected stationary segments the optimal tuning parameters are always feasible. Based on analogous results in Agarwal et al (2012) and Basu et al (2019) for models whose parameters admit a low rank and sparse decomposition, the optimal tuning parameters in (7) lead to the optimal estimation rate given in the next Theorem.…”
Section: Theoretical Properties
confidence: 99%
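For context, the estimator family these statements refer to can be sketched as follows. This is a generic low-rank plus sparse VAR(1) formulation; the exact penalties, lag structure, and tuning-parameter scalings in the cited works may differ:

$$
X_t = (L^* + S^*)\,X_{t-1} + \varepsilon_t, \qquad
(\hat{L},\hat{S}) \in \arg\min_{L,\,S}\; \frac{1}{T}\sum_{t=2}^{T}\bigl\|X_t - (L+S)X_{t-1}\bigr\|_2^2 \;+\; \lambda_L\|L\|_{*} \;+\; \lambda_S\|S\|_{1}.
$$

Here the nuclear norm $\|L\|_{*}$ promotes a low-rank component and $\|S\|_{1}$ a sparse one; the tuning parameters $\lambda_L$ and $\lambda_S$ are the quantities whose choice the quoted passage discusses.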
“…For example, brain activity data (see Example 1 in Section 6) exhibit low-dimensional structure (Schröder & Ombao (2019)) and so do macroeconomic data (Stock & Watson (2016), Example 2 in Section 6). Reduced rank auto-regressive models for stationary high-dimensional data were studied in Basu et al (2019). The key idea of such reduced rank models is that the lead-lagged relationships between the time series cannot simply be described by a few sparse components, as is the case for sparse VAR models.…”
Section: Introduction
confidence: 99%
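As a minimal numerical sketch of the reduced-rank idea (not the penalized estimator analyzed in Basu et al (2019); the function name, VAR(1) restriction, and simulation settings below are illustrative assumptions):

```python
import numpy as np

def reduced_rank_var1(X, rank):
    """Crude reduced-rank VAR(1) fit: OLS transition matrix, then truncated SVD.

    X    : (T, p) array of observations, rows ordered in time.
    rank : target rank for the estimated transition matrix.
    """
    Y, Z = X[1:], X[:-1]                                   # regress X_t on X_{t-1}
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    A_ols = B.T                                            # X_t ≈ A_ols @ X_{t-1}
    U, s, Vt = np.linalg.svd(A_ols, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]  # keep top `rank` directions

# Simulate a stable VAR(1) whose true transition matrix has rank 2.
rng = np.random.default_rng(0)
p, T, r = 20, 500, 2
A_true = 0.5 / p * rng.standard_normal((p, r)) @ rng.standard_normal((r, p))
X = np.zeros((T, p))
for t in range(1, T):
    X[t] = A_true @ X[t - 1] + 0.1 * rng.standard_normal(p)

A_hat = reduced_rank_var1(X, rank=2)
print(np.linalg.matrix_rank(A_hat))                        # 2 by construction
```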
“…To significantly improve the efficiency of the first step, we make the key observation that the similarity matrix A should be of low rank. This is due to the fact that the DTW algorithm measures the level of co-movement between time series, which has been shown to be dictated by only a small number of latent factors [Stock and Watson, 2005; Basu and Michailidis, 2015]. Indeed, we can verify the low-rankness in another way.…”
Section: A Parameter-free Scalable Algorithm
confidence: 95%
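On the low-rank claim, one can check empirically how quickly the singular values of a pairwise DTW matrix decay when the series share a few latent factors. The DTW routine and the factor simulation below are a minimal illustration, not the authors' pipeline, and the distance matrix is used as a stand-in for the similarity matrix A in the quote:

```python
import numpy as np

def dtw(a, b):
    """Plain O(len(a) * len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Series driven by a small number of common latent factors.
rng = np.random.default_rng(1)
n_series, length, n_factors = 30, 50, 3
factors = rng.standard_normal((n_factors, length)).cumsum(axis=1)
loadings = rng.standard_normal((n_series, n_factors))
series = loadings @ factors + 0.1 * rng.standard_normal((n_series, length))

A = np.array([[dtw(x, y) for y in series] for x in series])   # pairwise DTW distances
s = np.linalg.svd(A, compute_uv=False)
print(np.round(s[:6] / s[0], 3))   # rapid decay indicates an approximately low-rank A
```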
“…In order to significantly reduce the running time, we follow the setting of matrix completion [Sun and Luo, 2015] by assuming that the similarity matrix A is of low rank. This is a very natural assumption since the DTW algorithm captures the co-movements of time series, which have been shown to be driven by only a small number of latent factors [Stock and Watson, 2005; Basu and Michailidis, 2015]. According to the theory of matrix completion, only O(n log n) randomly sampled entries are needed to perfectly recover an n×n low-rank matrix.…”
Section: Introduction
confidence: 99%
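To make the matrix-completion step concrete, here is a minimal alternating-least-squares sketch that recovers a low-rank matrix from a random subset of its entries. It follows the generic factorization approach rather than the specific algorithm of Sun and Luo (2015); the function name, the 20% sampling rate, and the synthetic rank-2 target are illustrative assumptions:

```python
import numpy as np

def complete_low_rank(M_obs, mask, rank, n_iters=50, reg=1e-3):
    """Alternating least squares over observed entries only.

    M_obs : matrix holding the observed entries (values where mask is False are ignored).
    mask  : boolean matrix, True where an entry was sampled.
    """
    n, m = M_obs.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((n, rank))
    V = rng.standard_normal((m, rank))
    for _ in range(n_iters):
        for i in range(n):                        # update row factors
            Vi = V[mask[i]]
            U[i] = np.linalg.solve(Vi.T @ Vi + reg * np.eye(rank), Vi.T @ M_obs[i, mask[i]])
        for j in range(m):                        # update column factors
            Uj = U[mask[:, j]]
            V[j] = np.linalg.solve(Uj.T @ Uj + reg * np.eye(rank), Uj.T @ M_obs[mask[:, j], j])
    return U @ V.T

# Recover a rank-2 similarity-style matrix from ~20% of its entries.
rng = np.random.default_rng(2)
n, r = 100, 2
B = rng.standard_normal((n, r))
M = B @ B.T
mask = rng.random((n, n)) < 0.2
M_hat = complete_low_rank(M * mask, mask, rank=r)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))   # prints a small relative error
```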
“…The sparse Cholesky parametrization of the covariance matrix naturally models a hidden variable structure [28]–[31] over ordered Gaussian observables (Equation 2). Interpreting the error terms E as latent signal sources, the model is a sort of restricted GBN.…”
Section: B. A Hidden Variable Model Interpretation
confidence: 99%
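For readers unfamiliar with the parametrization, one standard way to write the sparse-Cholesky (regression) form of an ordered Gaussian vector is sketched below; the specific Equation 2 referenced in the quote belongs to the citing paper and is not reproduced here:

$$
x = Bx + e, \qquad x = (I - B)^{-1}e, \qquad \Sigma = \operatorname{Cov}(x) = (I - B)^{-1} D (I - B)^{-\top},
$$

where $B$ is strictly lower triangular (each variable regressed on its predecessors in the ordering), $D = \operatorname{Cov}(e)$ is diagonal, and sparsity in $B$ restricts the resulting Gaussian network; reading the errors $e$ as latent signal sources gives the hidden-variable interpretation mentioned in the quote.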