2021
DOI: 10.1063/5.0045521

Tensor-train approximation of the chemical master equation and its application for parameter inference

Abstract: In this work, we perform Bayesian inference tasks for the chemical master equation in the tensor-train format. The tensor-train approximation has been proven to be very efficient in representing high dimensional data arising from the explicit representation of the chemical master equation solution. An additional advantage of representing the probability mass function in the tensor train format is that parametric dependency can be easily incorporated by introducing a tensor product basis expansion in the parame…
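
The central idea the abstract describes, compressing a high-dimensional probability mass function into the tensor-train (TT) format, can be illustrated with a minimal TT-SVD sketch in plain NumPy. This is an illustrative sketch, not the paper's solver: the function names, the tolerance, and the separable three-species test distribution are all assumptions made here for demonstration.

```python
# Minimal TT-SVD sketch: compress a joint PMF into tensor-train cores.
# Illustrative only; this does not reproduce the paper's implementation.
import numpy as np
from scipy.stats import poisson

def tt_svd(tensor, eps=1e-10):
    """Sequential truncated SVDs turn a dense tensor into TT cores."""
    shape, d = tensor.shape, tensor.ndim
    cores, r = [], 1
    c = tensor.copy()
    for k in range(d - 1):
        c = c.reshape(r * shape[k], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        r_new = max(1, int(np.sum(s > eps)))   # TT-rank after truncation
        cores.append(u[:, :r_new].reshape(r, shape[k], r_new))
        c = s[:r_new, None] * vt[:r_new]       # pass the remainder along
        r = r_new
    cores.append(c.reshape(r, shape[-1], 1))
    return cores

def tt_eval(cores, idx):
    """Evaluate the TT approximation at one multi-index (one state)."""
    v = np.ones((1, 1))
    for core, i in zip(cores, idx):
        v = v @ core[:, i, :]
    return float(v[0, 0])

# A separable 3-species PMF (independent truncated Poissons) has TT ranks 1,
# so the format stores 3 * 16 numbers instead of 16**3 entries.
n = 16
marginals = [poisson.pmf(np.arange(n), lam) for lam in (3.0, 5.0, 2.0)]
P = np.einsum('i,j,k->ijk', *marginals)
cores = tt_svd(P)
print([c.shape for c in cores])                  # ranks collapse to 1
print(tt_eval(cores, (3, 5, 2)), P[3, 5, 2])     # values agree
```

For correlated distributions the TT ranks grow, but for many CME solutions they stay moderate, which is what makes the format efficient for the high-dimensional state spaces the abstract refers to.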

Cited by 18 publications (10 citation statements)
References: 50 publications
“…Numerical experiments have been performed to showcase the advantages of the proposed framework in terms of accuracy and computational efficiency Ion et al (2021). Among them, we present here only the SEIQR model.…”
Section: Results
Mentioning, confidence: 99%
“…A probabilistic description of the distribution over the parameter space (called the posterior) can be obtained using Bayes' rule. As presented in Ion et al (2021), updating the posterior implies solving the CME and constructing the likelihood (the conditional probability of observing the data given the underlying state of the system). Both steps are performed efficiently in the TT-format without being affected by the curse of dimensionality, since both the observation model and the CME operator can be computed directly in the TT-format.…”
Section: Inference Tasks
Mentioning, confidence: 99%
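
As a concrete illustration of the update rule this statement describes, the sketch below performs a pointwise Bayes update of a posterior over a one-dimensional rate parameter on a grid, without the TT compression. The flat prior and the Poisson observation model are stand-in assumptions; in the paper's setting the likelihood slice is read off the parameter-dependent CME solution instead.

```python
# Grid-based Bayes update sketch; the Poisson stand-in likelihood and the
# flat prior are assumptions, not the paper's observation model.
import numpy as np
from scipy.stats import poisson

def bayes_update(prior, likelihood):
    """Pointwise Bayes rule: posterior ∝ prior × likelihood, renormalized."""
    post = prior * likelihood
    return post / post.sum()          # discrete normalization over the grid

theta = np.linspace(0.1, 10.0, 200)   # candidate rate constants
prior = np.ones_like(theta)
prior /= prior.sum()

x_obs = 4                             # observed copy number
# In the TT framework this likelihood would come from the CME solution
# p(x_obs | theta); a Poisson pmf stands in here for illustration.
likelihood = poisson.pmf(x_obs, theta)

posterior = bayes_update(prior, likelihood)
print(theta[np.argmax(posterior)])    # MAP estimate, near x_obs
```

Repeating the update for a sequence of observations (with the posterior of one step becoming the prior of the next) gives the sequential inference loop; the TT-format makes the same arithmetic tractable when theta and the state space are high-dimensional.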
“…Given such a property, one can further estimate the proper N to control the truncation error [30], and N can then be adjusted for the neural network without losing flexibility in learning the joint distribution. Besides, compared with methods based on tensor networks [33,34], the main advantage of the VAN is its generality: the neural-network ansatz is more flexible in representing complex probability distributions.…”
Section: Discussion
Mentioning, confidence: 99%
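
To make the "choose a proper N" step concrete, here is a small sketch that picks the smallest truncation size whose discarded tail mass falls below a tolerance. The Poisson reference distribution is an assumption made here for illustration, not the estimator of [30].

```python
# Choose a truncation size N so the discarded tail mass stays below tol.
# The Poisson reference marginal is an illustrative assumption.
from scipy.stats import poisson

def choose_truncation(rate, tol=1e-8, n_max=10_000):
    """Smallest N with P(X >= N) < tol for X ~ Poisson(rate)."""
    for n in range(1, n_max):
        if poisson.sf(n - 1, rate) < tol:   # sf(n-1) = P(X >= n)
            return n
    raise ValueError("n_max too small for the requested tolerance")

print(choose_truncation(5.0))   # about 23 states suffice for rate 5
```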
“…Another class of methods truncates the CME to a state space covering the majority of the probability distribution; this class includes the finite state projection method [26,27], the sliding window method [28], and the ACME method [29,30] based on the finite-buffer technique [31]. Further advances employ the Krylov subspace approximation [32] and tensor-train representations [33,34]. However, the computational cost of these methods is still prohibitive at high accuracy when both the number of species types and their copy numbers become large [35].…”
Section: Introduction
Mentioning, confidence: 99%
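
To illustrate the state-space truncation behind the finite state projection mentioned above, the sketch below assembles the CME generator of a simple birth-death process on a truncated state space and certifies the truncation by the probability mass that leaks out of it. The rates, the truncation size, and the time horizon are assumptions chosen for illustration.

```python
# Finite-state-projection sketch for a birth-death process with constant
# production rate k and per-molecule degradation rate g; values illustrative.
import numpy as np
from scipy.linalg import expm

def birth_death_generator(N, k=2.0, g=0.1):
    """CME generator A on states {0, ..., N-1}, so that dp/dt = A p."""
    A = np.zeros((N, N))
    for n in range(N):
        A[n, n] -= k + g * n       # total outflow rate from state n
        if n + 1 < N:
            A[n + 1, n] += k       # birth n -> n+1, kept inside the projection
        if n > 0:
            A[n - 1, n] += g * n   # death n -> n-1
    return A

N = 64
A = birth_death_generator(N)
p0 = np.zeros(N)
p0[0] = 1.0                        # start with zero molecules
p_t = expm(10.0 * A) @ p0          # truncated CME solution at t = 10
print(1.0 - p_t.sum())             # leaked mass bounds the projection error
```

Because mass that crosses the truncation boundary is simply lost, the deficit 1 - sum(p_t) gives a computable bound on the truncation error, which is the certificate the finite state projection method is built on; the tensor-train approach of [33,34] attacks the same state-space explosion by compression rather than truncation alone.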