2020
DOI: 10.48550/arxiv.2005.10483
Preprint

Graphical continuous Lyapunov models

Gherardo Varando,
Niels Richard Hansen

Abstract: The linear Lyapunov equation of a covariance matrix parametrizes the equilibrium covariance matrix of a stochastic process. This parametrization can be interpreted as a new graphical model class, and we show how the model class behaves under marginalization and introduce a method for structure learning via ℓ1-penalized loss minimization. Our proposed method is demonstrated to outperform alternative structure learning algorithms in a simulation study, and we illustrate its application for protein phosphorylation…
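The abstract's central object can be made concrete. For an Ornstein–Uhlenbeck-type process dX_t = M X_t dt + C dW_t with stable drift M, the equilibrium covariance Σ solves the continuous Lyapunov equation MΣ + ΣMᵀ + CCᵀ = 0. A minimal sketch of solving it via Kronecker vectorization (the function name and the toy matrices are illustrative, not taken from the paper):

```python
import numpy as np

def equilibrium_covariance(M, C):
    """Solve M S + S M^T + C C^T = 0 for the equilibrium covariance S.

    Vectorizing the equation gives a linear system in vec(S):
    (M (x) I + I (x) M) vec(S) = -vec(C C^T), with (x) the Kronecker product.
    Assumes M is stable (all eigenvalues have negative real part).
    """
    p = M.shape[0]
    Q = C @ C.T
    K = np.kron(M, np.eye(p)) + np.kron(np.eye(p), M)
    S = np.linalg.solve(K, -Q.reshape(-1)).reshape(p, p)
    return 0.5 * (S + S.T)  # symmetrize to absorb round-off error

# Hypothetical stable drift and unit diffusion for illustration.
M = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
C = np.eye(2)
S = equilibrium_covariance(M, C)
residual = M @ S + S @ M.T + C @ C.T  # should vanish at equilibrium
```

The Kronecker-vectorized solve costs O(p^6), so it is only a sketch; dedicated Lyapunov solvers scale better for larger p.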

Cited by 1 publication (3 citation statements)
References 21 publications
“…Proof. This proof follows some ideas from [17, Proposition 2.1]. By matrix calculus [30], we have the following gradient expression for j < i, …”
Section: Discussion (mentioning, confidence: 92%)
“…Solving Equation (15) can be done via proximal gradient algorithms, which have optimal convergence rates among first-order methods [25] and are tailored for a convex φ but also competitive in the non-convex case [17]. In this work we have used two such smooth loss functions: the negative Gaussian log-likelihood and the triangular Frobenius norm.…”
Section: Penalized Gradient-based Learning Of the Covariance Sparse C... (mentioning, confidence: 99%)
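The proximal gradient scheme mentioned in the statement above can be sketched generically: for an ℓ1-penalized smooth loss, each iteration takes a gradient step on the smooth part and then applies the soft-thresholding proximal operator of the ℓ1 penalty. A toy sketch (the quadratic loss and all names here are illustrative stand-ins; the paper's actual smooth losses are the negative Gaussian log-likelihood and a Frobenius-type loss):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1: shrinks each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(grad, lam, x0, step, n_iter=200):
    # Proximal gradient (ISTA): gradient step on the smooth loss,
    # followed by soft-thresholding for the l1 penalty lam * ||x||_1.
    x = x0.copy()
    for _ in range(n_iter):
        x = soft_threshold(x - step * grad(x), step * lam)
    return x

# Toy smooth loss 0.5 * ||x - b||^2, whose gradient is x - b.
b = np.array([3.0, -0.2, 0.0, 1.5])
x_hat = ista(lambda x: x - b, lam=0.5, x0=np.zeros_like(b), step=1.0)
# With step 1 and this loss, the iteration's fixed point is
# soft_threshold(b, 0.5): small coordinates are set exactly to zero,
# which is the sparsity mechanism behind l1-penalized structure learning.
```

The same template applies to the matrix-valued problem in the paper by replacing the toy gradient with the gradient of the chosen smooth loss in the drift matrix.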