2020
DOI: 10.1109/access.2020.3027064

Incremental Learning of Latent Forests

Abstract: In the analysis of real-world data, it is useful to learn a latent variable model that represents the data generation process. In this setting, latent tree models are useful because they are able to capture complex relationships while being easily interpretable. In this paper, we propose two incremental algorithms for learning forests of latent trees. Unlike current methods, the proposed algorithms are based on the variational Bayesian framework, which allows them to introduce uncertainty into the learning pro…

Cited by 2 publications (4 citation statements) · References 51 publications (48 reference statements)
“…We ran 1000 iterations of the sampler using the Python implementation available at https://github.com/ivaleraM/GLFM. • Incremental learner (IL), 50 which hill-climbs the space of latent forests in a two-phase iterative process. In its first phase, the forest structure is incremented with a new arc or latent variable.…”
Section: Methods
confidence: 99%
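The incremental learner described in the snippet can be sketched as a greedy hill-climbing loop. This is an illustrative sketch only: `candidate_ops` (the add-arc / add-latent-variable moves) and `score` are placeholder names, not the paper's actual API, and the real algorithm scores candidates with the variational Bayesian framework rather than an arbitrary function.

```python
# Hedged sketch of hill-climbing over latent forest structures.
# `candidate_ops(forest)` yields callables that each return a copy of
# the forest with one structural increment applied (a new arc or a new
# latent variable); `score` is the objective (p-ELBO in the paper).

def incremental_learner(forest, candidate_ops, score):
    best = score(forest)
    while True:
        # First phase: enumerate and score every structural increment.
        scored = [(score(op(forest)), op) for op in candidate_ops(forest)]
        if not scored:
            break
        new_score, op = max(scored, key=lambda t: t[0])
        if new_score <= best:  # no candidate improves the score: stop
            break
        forest, best = op(forest), new_score  # commit best increment
    return forest

# Toy usage: "forests" are integers, each op increments by one,
# and the score is the value itself; climbing stops at 3.
ops = lambda f: ([lambda x: x + 1] if f < 3 else [])
result = incremental_learner(0, ops, lambda f: f)
```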
“…In the local VB-SEM algorithm, we allow adding, removing, or reversing arcs that contain any of the variables considered by the latent operator. Once the structure has been learned, the local VB-EM algorithm 50 estimates the parameters of those variables belonging to the Markov blankets (MBs) of: (i) variables initially selected by the latent operator, and (ii) variables affected by a structure change (e.g., a new incoming arc). For any variable in a BN, its MB consists of the set of all its parents, children, and spouses (parents of children) in the network.…”
Section: Local VB-SEM
confidence: 99%
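The Markov blanket definition quoted above (parents, children, and spouses) is simple to compute given a DAG. A minimal sketch, assuming the graph is represented as a dict mapping each node to its set of parents (our own representation, not the paper's code):

```python
# Compute the Markov blanket of a node in a Bayesian network DAG:
# its parents, its children, and the other parents of its children.

def markov_blanket(node, parents):
    """`parents` maps each node to the set of its parent nodes."""
    children = {v for v, ps in parents.items() if node in ps}
    spouses = set()
    for child in children:
        spouses |= parents[child] - {node}  # co-parents of each child
    return parents.get(node, set()) | children | spouses

# Toy network: A -> C, B -> C, C -> D
parents = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
print(sorted(markov_blanket("A", parents)))  # ['B', 'C']
```

Here A's blanket contains its child C and its spouse B (C's other parent), matching the definition in the quoted passage.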
“…VBEM iterates over the VB-E and VB-M steps until the difference in ELBO becomes smaller than a given threshold, indicating convergence. A revised version called p-ELBO was proposed by Rodriguez-Sanchez et al. (2020) that includes a penalty term to avoid the |L_i|! equivalent ways of assigning sets of parameters that result in the same distribution (non-identifiability), and it is defined as p…”
Section: Variational Bayesian Expectation-Maximization (VBEM) Algorithm
confidence: 99%
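The VBEM outer loop described above (alternate VB-E and VB-M steps until the ELBO improvement falls below a threshold) can be sketched generically. The step and score functions below are placeholders standing in for the model-specific computations, not the authors' implementation:

```python
# Generic VBEM convergence loop: alternate VB-E (update the variational
# posterior q) and VB-M (update the variational parameters) until the
# change in the ELBO is smaller than `tol`.

def vbem(q, params, vb_e_step, vb_m_step, elbo, tol=1e-4, max_iter=100):
    prev = float("-inf")
    for _ in range(max_iter):
        q = vb_e_step(params)          # VB-E: update posterior given params
        params = vb_m_step(q)          # VB-M: update params given posterior
        current = elbo(q, params)
        if abs(current - prev) < tol:  # convergence: ELBO change < threshold
            break
        prev = current
    return q, params

# Toy usage: steps that contract params toward 1, with a quadratic
# stand-in for the ELBO; the loop stops once improvements are tiny.
q, p = vbem(None, 0.0,
            lambda params: params,
            lambda q: (q + 1) / 2,
            lambda q, p: -(p - 1) ** 2)
```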
“…The ELBO score was extended to p-ELBO by Rodriguez-Sanchez et al. (2020; 2022), which was used as the objective score in the Constrained Incremental Learner (CIL) and Greedy Latent Structure Learner (GLSL) algorithms. CIL learns a tree-structured BN that assumes any two nodes are connected by only one directed path, whereas GLSL learns a DAG BN.…”
Section: Past Relevant Work
confidence: 99%