2019
DOI: 10.1109/lsp.2019.2915000

Kullback–Leibler Divergence Between Multivariate Generalized Gaussian Distributions


Cited by 37 publications (14 citation statements)
References 16 publications
“…Despite not being directly implemented in modern software packages (such as Matlab, Mathematica, Maple, etc.), their computation, efficiently performed by numerical calculation of the inverse Laplace transform (exhaustively discussed in [47][48][49][50], for example), is frequently discussed in the literature, including truncation errors, the required number of summands, and the achievable computational gain relative to numeric integration (see, for instance, [16,28,32,51,52]).…”
Section: Discussion and Further Generalization
confidence: 99%
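The numerical inversion of the Laplace transform mentioned in this excerpt can be illustrated with the classical Gaver-Stehfest summation, which approximates f(t) from its transform F(s) using a finite, even number of summands. This is only a sketch of one such inversion scheme, not the specific algorithms of [47]-[50]; the function names and the default n = 14 are our own choices.

```python
import math

def stehfest_coefficients(n):
    """Gaver-Stehfest weights V_k for an even number of summands n."""
    half = n // 2
    weights = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)) / (
                math.factorial(half - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k)
            )
        weights.append((-1) ** (k + half) * s)
    return weights

def invert_laplace_stehfest(F, t, n=14):
    """Approximate f(t) from its Laplace transform F(s) using n summands."""
    V = stehfest_coefficients(n)
    a = math.log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, n + 1))

# Sanity check: F(s) = 1/(s + 1) is the transform of f(t) = exp(-t).
print(invert_laplace_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0))  # ~0.3679
```

Increasing n reduces the truncation error only until floating-point cancellation in the alternating weights takes over, which is one reason the required number of summands is a recurring topic in the cited works.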
“…The variational approximation to the Bayesian posterior distribution on the weights is a feasible method. Variational learning finds the parameters of a distribution on the weights that minimizes the Kullback–Leibler (KL) divergence [63] with the true Bayesian posterior on the weights: …”
Section: Distributed Deep Fusion Predictor
confidence: 99%
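For the fully factorized Gaussian case commonly used in variational learning of network weights, the KL term referred to in this excerpt has a closed form. The sketch below assumes a diagonal Gaussian posterior q and a diagonal Gaussian prior p (an assumption on our part; [63] may use a different parameterization), and the helper name kl_diag_gaussians is hypothetical.

```python
import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """KL(q || p) between two diagonal Gaussians, summed over the
    independent weight dimensions."""
    var_q, var_p = sigma_q ** 2, sigma_p ** 2
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# Example: a variational posterior over two weights against a standard-normal prior.
mu_q, sigma_q = np.array([0.3, -0.1]), np.array([0.5, 0.8])
print(kl_diag_gaussians(mu_q, sigma_q, np.zeros(2), np.ones(2)))
```

In variational learning this term is added to the expected negative log-likelihood, so minimizing the combined objective pulls the weight distribution toward the prior while still fitting the data.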
“…First, we need to build a Q-value network and a policy network. The Q-value network outputs a single Q-value through several layers of neural networks, and the policy network outputs a Gaussian distribution over actions [29]. In this process, the neural networks are updated.…”
Section: Soft Actor-Critic
confidence: 99%
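A minimal sketch of the two networks described here, assuming a PyTorch implementation; the class names, hidden width, and log-std clamp range are our own choices rather than details taken from [29]. The Q-value network maps a state-action pair to a single scalar, and the policy network maps a state to the mean and log standard deviation of a Gaussian over actions.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a (state, action) pair to a single scalar Q-value."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

class GaussianPolicy(nn.Module):
    """Maps a state to the mean and log-std of a Gaussian over actions."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, action_dim)
        self.log_std_head = nn.Linear(hidden, action_dim)

    def forward(self, state):
        h = self.trunk(state)
        # Clamp log-std so the sampled Gaussian stays well conditioned.
        return self.mean_head(h), self.log_std_head(h).clamp(-20, 2)
```

During training, both the Q-value network and the policy network are updated by gradient descent, which matches the excerpt's note that the neural networks are updated in this process.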