2019
DOI: 10.1609/aaai.v33i01.33017816

Fast Relational Probabilistic Inference and Learning: Approximate Counting via Hypergraphs

Abstract: Counting the number of true instances of a clause is arguably a major bottleneck in relational probabilistic inference and learning. We approximate counts in two steps: (1) transform the fully grounded relational model to a large hypergraph, and partially-instantiated clauses to hypergraph motifs; (2) since the expected counts of the motifs are provably the clause counts, approximate them using summary statistics (in/out-degrees, edge counts, etc.). Our experimental results demonstrate the efficiency of these approximations…
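
To make step (2) concrete, here is a minimal Python sketch for the simplest case: the two-edge "path" motif that a partially-instantiated clause p(x, y) ∧ q(y, z) maps to. The function names and the coarse fallback estimate are illustrative assumptions, not the paper's algorithm:

```python
from collections import defaultdict

def path_motif_count(p_edges, q_edges):
    """Count of groundings of p(x, y) ^ q(y, z).

    This two-edge "path" motif decomposes over the shared node y:
    the count is sum_y indeg_p(y) * outdeg_q(y), so no enumeration
    of (x, y, z) triples is needed.
    """
    indeg_p = defaultdict(int)   # y -> number of p-edges ending at y
    outdeg_q = defaultdict(int)  # y -> number of q-edges leaving y
    for _, y in p_edges:
        indeg_p[y] += 1
    for y, _ in q_edges:
        outdeg_q[y] += 1
    return sum(indeg_p[y] * outdeg_q[y] for y in indeg_p)

def independence_estimate(num_p_edges, num_q_edges, num_nodes):
    """O(1) approximation from edge/node totals alone, assuming the
    shared endpoint is uniformly distributed over the nodes."""
    return num_p_edges * num_q_edges / num_nodes
```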

Citation Types: 0 supporting, 9 mentioning, 0 contrasting
Year Published: 2019–2024

Cited by 12 publications (9 citation statements). References 23 publications.
“…Lowd and Domingos (2007) discuss multiple methods for solving the MLE problem, including gradient descent, contrastive divergence, diagonal Newton, and conjugate gradient. Recent efforts for scaling discriminative MLE weight learning reduce the search space (Farabi et al. 2018), leverage symmetries to speed up inference (Ahmadi et al. 2012; Van Haaren et al. 2015), or approximate the inference subproblem (Sarkhel et al. 2016; Das et al. 2016, 2019). For this discussion and for later experiments, we use the diagonal Newton method (DN) as the representative MLE weight learner, as it was found to be among the most effective for MLE weight learning in MLNs (Lowd and Domingos 2007) and is the default optimizer for Tuffy (Niu et al. 2011).…”
Section: Maximum-Likelihood Estimation (mentioning)
confidence: 99%
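
For concreteness, a minimal sketch of one DN weight update, assuming the inference subproblem has already produced the expected clause counts and their variances under the current model; the fixed learning rate is a stand-in for the more careful step-size control in Lowd and Domingos (2007):

```python
import numpy as np

def diagonal_newton_step(w, data_counts, model_counts, model_vars,
                         lr=1.0, eps=1e-8):
    """One diagonal Newton (DN) update for MLN weight learning (sketch).

    The gradient of the negative log-likelihood w.r.t. weight i is
    E_w[n_i] - n_i(data); the Hessian's diagonal is the variance of
    the clause counts n_i under the model. Estimating both is the
    inference subproblem the cited methods speed up or approximate.
    """
    grad = model_counts - data_counts      # NLL gradient per clause
    hess = np.maximum(model_vars, eps)     # keep the diagonal positive
    return w - lr * grad / hess            # Newton step, scaled by 1/H_ii
```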
“…Typically, the weights of the rules are learned by maximizing some form of likelihood function (Bach et al. 2013; Lowd and Domingos 2007; Singla and Domingos 2005; Kok and Domingos 2005; Chou et al. 2016; Sarkhel et al. 2016; Das et al. 2016, 2019; Farabi et al. 2018). This is a well-motivated approach if the downstream objective makes use of the probability density function directly.…”
Section: Introduction (mentioning)
confidence: 99%
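
As background for these likelihood-based learners, the standard gradient of an MLN's log-likelihood (see Lowd and Domingos 2007) shows why weight learning hinges on the counting/inference subproblem discussed above:

\[
\frac{\partial}{\partial w_i} \log P_w(X = x) \;=\; n_i(x) \;-\; \mathbb{E}_w\!\left[ n_i(X) \right],
\]

where n_i(x) is the number of true groundings of clause i in the data; the expectation under the current model is intractable in general, and it is exactly what the approximation methods cited above target.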
“…The experimental results obtained showed that the proposed method has better performance when the networks are heterophilous. Das et al. [8] proposed fast relational probabilistic inference and learning using approximate counting via hypergraphs. Their experimental results showed that the efficiency of these approximations allows the models to perform significantly faster than the state of the art, and that they can be successfully applied to several complex statistical relational models without sacrificing effectiveness.…”
Section: Application of SRL (mentioning)
confidence: 99%
“…Algorithm 1 outlines the key steps involved in our approach. KCLN() is the main procedure [lines 1–14] that trains a Column Network using both the data (the knowledge graph G) and the human advice (the set of preference rules P). It returns a K-CLN C_P, where θ_P are the network parameters, which are initialized to an arbitrary value (0 in our case) [line 3].…”
Section: The Algorithm (mentioning)
confidence: 99%
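
The excerpt fixes only the procedure's shape, so the following is a toy, runnable stand-in under loudly labeled assumptions: a logistic model replaces the actual Column Network, and preference_mask is a hypothetical 0/1 encoding of the advice rules P; only the structure (θ_P initialized to 0, with data and advice both driving the updates) mirrors the quoted Algorithm 1:

```python
import numpy as np

def kcln_sketch(node_features, labels, preference_mask,
                epochs=100, lr=0.1):
    """Toy stand-in for KCLN(): learn theta from data plus advice.

    preference_mask is a hypothetical float 0/1 vector marking features
    the advice prefers to weight nonnegatively. The real K-CLN gates
    advice into a Column Network's layers; a logistic model and a
    one-sided advice penalty here only illustrate the two-signal loop.
    """
    n, d = node_features.shape
    theta = np.zeros(d)                  # theta_P initialized to 0 [line 3]
    for _ in range(epochs):
        probs = 1.0 / (1.0 + np.exp(-node_features @ theta))
        data_grad = node_features.T @ (probs - labels) / n  # fit the data
        advice_grad = -preference_mask * (theta < 0.0)      # nudge preferred
        theta -= lr * (data_grad + advice_grad)             # weights upward
    return theta                         # parameters of the learned model
```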