Learning in Graphical Models 1998
DOI: 10.1007/978-94-011-5014-9_5

An Introduction to Variational Methods for Graphical Models

Abstract: This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models. We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, showing how upper and lower bounds can be found for local probabilities, and discussing methods fo…

Cited by 1,585 publications (2,147 citation statements)
References 25 publications
“…As mentioned previously, in graphical models (or Bayesian networks, in particular) a node represents a random variable and links specify the dependency relationships between these variables [22]. The states of the random variables can be hidden in the sense that they are not directly observable, but it is assumed that they may have observations related to the state values.…”
Section: Belief Propagation
confidence: 99%
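The excerpt above describes the core representational idea: a node is a random variable whose state may be hidden, with observations that depend on that state. A minimal sketch of this in Python, using hypothetical prior and likelihood numbers (not from the cited paper), infers the hidden state from one observation by Bayes' rule:

```python
# Toy two-node model: hidden state Z -> observation X.
# All probabilities here are made-up illustrative values.

# Prior over the hidden state Z in {0, 1}.
prior = {0: 0.6, 1: 0.4}

# Likelihood P(X = x | Z = z): each hidden state emits an observation.
likelihood = {
    0: {"a": 0.9, "b": 0.1},
    1: {"a": 0.2, "b": 0.8},
}

def posterior(obs):
    """P(Z | X = obs) via Bayes' rule with explicit normalisation."""
    unnorm = {z: prior[z] * likelihood[z][obs] for z in prior}
    total = sum(unnorm.values())
    return {z: p / total for z, p in unnorm.items()}

print(posterior("b"))  # hidden state 1 becomes far more probable
```

Belief propagation generalises exactly this local Bayes update to whole graphs, passing messages along the links that encode the dependency relationships.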
“…For general graphical models, the term decomposable and the term triangulated have their own meanings (they are actually equivalent properties [21]). Here, we use the term decomposable triangulated specifically for the graph type defined in this paragraph.…”
Section: Detection and Labeling
confidence: 99%
“…For general graphical models, the labeling problem is the most-probable-configuration problem on the graph and can be solved through max-propagation on junction trees [18], [21], [28]. The dynamic programming algorithm [2] and the max-propagation algorithm essentially have the same order of complexity which is determined by the maximum clique size of the graph.…”
Section: Decomposable Triangulated Graphs and General Graphical Models
confidence: 99%
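The most-probable-configuration problem mentioned in the excerpt reduces, on a chain, to the familiar Viterbi recursion; on general graphs the same max-product recursion runs over a junction tree, with complexity governed by the maximum clique size. A hypothetical chain-structured sketch (model parameters are made up for illustration):

```python
def most_probable_path(init, trans, emit, obs):
    """Most probable hidden-state sequence for a chain-structured model.

    init[s]     : P(z_1 = s)
    trans[s][t] : P(z_{n+1} = t | z_n = s)
    emit[s][x]  : P(x | z = s)
    For long chains, work in log-space to avoid underflow.
    """
    states = list(init)
    # delta[s] = probability of the best partial path ending in state s
    delta = {s: init[s] * emit[s][obs[0]] for s in states}
    back = []  # back-pointers for each step after the first
    for x in obs[1:]:
        prev, delta, ptr = delta, {}, {}
        for t in states:
            s_best = max(states, key=lambda s: prev[s] * trans[s][t])
            delta[t] = prev[s_best] * trans[s_best][t] * emit[t][x]
            ptr[t] = s_best
        back.append(ptr)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: delta[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# Hypothetical sticky two-state chain.
init = {0: 0.5, 1: 0.5}
trans = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
emit = {0: {"a": 0.8, "b": 0.2}, 1: {"a": 0.3, "b": 0.7}}
print(most_probable_path(init, trans, emit, "abbb"))  # → [1, 1, 1, 1]
```

The per-step maximisation over pairs of states is the chain special case of max-propagation over cliques, which is why the dynamic programming algorithm [2] and junction-tree max-propagation have the same order of complexity.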
“…(2) is not tractable, we develop a generalised EM (GEM) algorithm (see e.g. [6]), with approximate E-step. In general terms, for each data point t n , its log-likelihood can be bounded as follows:…”
Section: Structured Variational EM Solution
confidence: 99%
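The excerpt is cut off before the bound itself. The standard variational lower bound that such a generalised-EM construction refers to (given here in generic notation, not necessarily the cited work's exact form) follows from Jensen's inequality: for any distribution $q(z)$ over the hidden variables,

$$
\log p(t_n) \;=\; \log \sum_{z} q(z)\,\frac{p(t_n, z)}{q(z)}
\;\ge\; \sum_{z} q(z) \log \frac{p(t_n, z)}{q(z)},
$$

with equality iff $q(z) = p(z \mid t_n)$. An approximate E-step chooses $q$ within a tractable family to tighten this bound, and the M-step then increases the bound with respect to the model parameters.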
“…a fully factorial form [6] is the most common choice. However, under our model definitions, it is feasible to keep some of the posterior dependencies by choosing the following tree-structured…”
Section: Structured Variational EM Solution
confidence: 99%
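The trade-off the excerpt describes, a fully factorial (mean-field) posterior versus one that keeps some dependencies, can be seen on a toy example. The sketch below (hypothetical two-variable joint, not from the cited paper) fits a fully factorised $q(z_1,z_2)=q_1(z_1)\,q_2(z_2)$ by coordinate ascent; the factorised form recovers the marginals but cannot represent the correlation, which is exactly what a tree-structured posterior would retain:

```python
import math

# Toy correlated binary "posterior" we try to approximate (made-up values).
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Fully factorial (mean-field) approximation q(z1, z2) = q1(z1) * q2(z2).
q1 = {0: 0.5, 1: 0.5}
q2 = {0: 0.5, 1: 0.5}

for _ in range(50):
    # Coordinate update: q1(z1) ∝ exp( E_{q2}[ log p(z1, z2) ] )
    logit = {z1: sum(q2[z2] * math.log(p[(z1, z2)]) for z2 in (0, 1))
             for z1 in (0, 1)}
    norm = sum(math.exp(v) for v in logit.values())
    q1 = {z: math.exp(v) / norm for z, v in logit.items()}
    # Symmetric update for q2.
    logit = {z2: sum(q1[z1] * math.log(p[(z1, z2)]) for z1 in (0, 1))
             for z2 in (0, 1)}
    norm = sum(math.exp(v) for v in logit.values())
    q2 = {z: math.exp(v) / norm for z, v in logit.items()}

# The marginals are matched, but q1(0) * q2(0) = 0.25 while p(0, 0) = 0.4:
# the factorised family simply cannot express the dependency.
print(q1, q2)
```

This is the motivation for the structured choice in the excerpt: where the model permits it, keeping selected posterior dependencies (e.g. a tree-structured $q$) gives a strictly richer family than the fully factorial form of [6] at modest extra cost.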