2022
DOI: 10.1613/jair.1.13138

Learning Bayesian Networks Under Sparsity Constraints: A Parameterized Complexity Analysis

Abstract: We study the problem of learning the structure of an optimal Bayesian network when additional constraints are posed on the network or on its moralized graph. More precisely, we consider the constraint that the network or its moralized graph is close, in terms of vertex or edge deletions, to a sparse graph class Π. For example, we show that an optimal network whose moralized graph has vertex deletion distance at most k from a graph with maximum degree 1 can be computed in polynomial time when k is constant.
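To make the sparsity constraint in the abstract concrete, here is a minimal brute-force Python sketch, not the paper's algorithm, that checks whether a graph is within vertex deletion distance k of a graph with maximum degree 1. The adjacency-dict representation and function names are illustrative; the enumeration runs in roughly O(n^k) time, which is polynomial only for constant k.

```python
from itertools import combinations

def max_degree(adj, removed):
    """Maximum degree of the graph induced on the vertices not in `removed`."""
    kept = set(adj) - removed
    return max((sum(1 for u in adj[v] if u in kept) for v in kept), default=0)

def within_deletion_distance(adj, k):
    """Brute-force check: can deleting at most k vertices leave a graph of
    maximum degree 1?  Illustrates the constraint only; the paper's
    polynomial-time result is about learning the network, not this check."""
    vertices = list(adj)
    for size in range(k + 1):
        for removed in combinations(vertices, size):
            if max_degree(adj, set(removed)) <= 1:
                return True
    return False

# A triangle {1,2,3} plus a pendant edge 3-4: deleting vertex 3 leaves
# only the edge 1-2 and the isolated vertex 4, so max degree is 1.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(within_deletion_distance(adj, 1))  # True
```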

Cited by 5 publications (12 citation statements) · References 19 publications
“…In the past 11 years, many researchers have been working on BN structure learning, and some progress has been achieved. Grüttemeier et al 1 proposed a BN structure learning algorithm with structural constraints. Gu et al 2 learnt large-scale BNs (>50 nodes) by partitioning the nodes into clusters, learning a subgraph for each cluster separately, and combining these subgraphs to obtain the final complete BN.…”
Section: Related Work
confidence: 99%
“…RKGA and BRKGA contain mutation and crossover operators like GA, but they add a component called a decoder that converts random keys into initial solutions to the optimization problem. Random keys are vectors of real numbers in the interval [0,1] whose length equals that of the final solution. The decoder is designed per optimization problem and can be very simple or act as a local optimizer.…”
Section: Biased Random-key Genetic Algorithm
confidence: 99%
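As an illustration of the random-key representation described in the quote above, here is a minimal Python sketch (not code from any of the cited papers). The decoder here sorts keys to obtain a permutation, a common choice for ordering-based problems; the bias value and all function names are illustrative assumptions.

```python
import random

def random_key_chromosome(n):
    """A chromosome is a vector of n random keys in [0, 1]."""
    return [random.random() for _ in range(n)]

def decode_permutation(keys):
    """Decoder: sort indices by their keys to obtain a permutation.
    Decoders are problem-specific; sorting keys into an ordering is
    one common, very simple choice."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def biased_crossover(elite, non_elite, bias=0.7):
    """BRKGA-style biased uniform crossover: each key is inherited from
    the elite parent with probability `bias` (> 0.5)."""
    return [e if random.random() < bias else o
            for e, o in zip(elite, non_elite)]

# Example: a 5-gene chromosome decoded into an ordering of 5 items.
keys = random_key_chromosome(5)
print(keys, "->", decode_permutation(keys))
```

Because the chromosome is just a real vector, the genetic operators stay problem-independent; all problem knowledge lives in the decoder.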
“…Moreover, even very restricted special cases of BNSL remain NP-hard, for example the case where every possible parent set has constant size (Ordyniak and Szeider 2013). Finally, restricting the topology of the DAG to sparse classes such as graphs of bounded treewidth or bounded degree leads to NP-hard learning problems as well (Korhonen and Parviainen 2013, 2015; Grüttemeier and Komusiewicz 2020).…”
Section: Introduction
confidence: 99%