Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing 2019
DOI: 10.1145/3313276.3316372
Learning restricted Boltzmann machines via influence maximization

Abstract: Graphical models are a rich language for describing high-dimensional distributions in terms of their dependence structure. While there are algorithms with provable guarantees for learning undirected graphical models in a variety of settings, there has been much less progress in the important scenario when there are latent variables. Here we study Restricted Boltzmann Machines (or RBMs), which are a popular model with wide-ranging applications in dimensionality reduction, collaborative filtering, topic modeling…

Cited by 14 publications (27 citation statements) · References 28 publications
“…Compared with a sigmoid belief network, learning the RBM weights is simpler. The weights of the generative model are obtained in advance using a greedy, layer-by-layer unsupervised learning method [30]–[32]. The learning process maps the visible vector to the hidden-layer units, reconstructs the visible-layer units from the hidden-layer units, and then maps the reconstructed visible-layer units to the hidden-layer units again.…”
Section: E Restricted Boltzmann Machines
confidence: 99%
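The visible→hidden→visible→hidden pass described in the statement above is one step of Gibbs sampling, as used in contrastive divergence (CD-1), a standard way to approximate the RBM weight gradient. A minimal NumPy sketch of that update for a binary RBM (the names `W`, `b_vis`, `b_hid`, and the learning rate are illustrative assumptions, not taken from the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    v0:    batch of visible vectors, shape (batch, n_vis)
    W:     weight matrix, shape (n_vis, n_hid)  -- hypothetical names
    """
    # Map the visible units to the hidden units.
    h0_prob = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Reconstruct the visible units from the hidden units.
    v1_prob = sigmoid(h0 @ W.T + b_vis)
    # Map the reconstruction back to the hidden units.
    h1_prob = sigmoid(v1_prob @ W + b_hid)
    # Gradient estimate: positive phase minus negative (reconstruction) phase.
    batch = v0.shape[0]
    W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
    b_vis += lr * (v0 - v1_prob).mean(axis=0)
    b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, b_vis, b_hid

# Usage on toy binary data.
n_vis, n_hid = 6, 4
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
v0 = (rng.random((8, n_vis)) < 0.5).astype(float)
W, b_vis, b_hid = cd1_step(v0, W, b_vis, b_hid)
```

In deep belief networks, this greedy layer-by-layer procedure is repeated: the hidden activations of one trained RBM become the visible data for the next.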
“…In [32], a consistent graph-learning algorithm is proposed under the assumptions that the adjacency matrix of the graph is sparse and that the error matrix associated with the latent variables is low-rank. In [33], an approach based on influence maximization is adopted to establish when the graph-learning problem is feasible over the class of restricted Boltzmann machines.…”
Section: A Learning Graphical Models
confidence: 99%
“…There is a large literature on structure learning (see [38,45,8,26,49,10,25,20,41] and references therein), but prediction-centric learning remains poorly understood. In this paper, we focus on tree Ising models.…”
Section: Introduction
confidence: 99%