In recent years, Bayesian networks have become a popular framework for estimating the dependency structure of a set of variables. However, because structure learning is NP-hard, this is a challenging task, and typical state-of-the-art algorithms fail to learn in domains with several thousand variables. In this paper we introduce a novel algorithm, called substructure learning, that reduces the complexity of learning large networks by splitting the task into several small subtasks. Instead of learning one complete network, we estimate the network structure iteratively by learning small subnetworks. Results on several benchmark cases show that substructure learning efficiently reconstructs the network structure in large domains with high accuracy.
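The divide-and-conquer idea described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual procedure: the block partitioning, the `substructure_learning` and `chain_learner` names, and the toy learner are all assumptions introduced here.

```python
def substructure_learning(variables, learn_subnetwork, block_size=50):
    """Hypothetical sketch: split the variable set into small blocks,
    learn a subnetwork over each block, and merge the edge sets.
    `learn_subnetwork` stands in for any off-the-shelf structure learner."""
    edges = set()
    for i in range(0, len(variables), block_size):
        block = variables[i:i + block_size]
        edges |= learn_subnetwork(block)  # one small subtask per iteration
    return edges

# Toy stand-in learner: connect consecutive variables within a block.
def chain_learner(block):
    return {(a, b) for a, b in zip(block, block[1:])}
```

Note that this naive partitioning misses edges between variables in different blocks; the actual algorithm would need overlapping or iteratively refined subnetworks to recover those.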
Pure W₆Cl₁₈ was synthesized by two methods: by oxidizing W₆Cl₁₂ with CCl₄ in an autoclave, and by reaction of W₆Cl₁₂ in a flow of chlorine gas. At temperatures above 400 °C and under atmospheric pressure, W₆Cl₁₈ transforms into W₆Cl₁₂. The crystal structure of W₆Cl₁₈ was refined by the Rietveld method from X-ray powder data. The unusual electronic situation of the 18-electron cluster [W₆Cl₁₂]Cl₆ is compared with that of the electron-precise 24-electron cluster [W₆Cl₈]Cl₄. The compound exhibits paramagnetic behaviour, with two electrons in antibonding energy levels.
Bayesian networks (BNs) are knowledge representation tools capable of representing dependence and independence relationships among the random variables that compose a problem domain. Bayesian networks learned from data sets are receiving increasing attention within the community of researchers of uncertainty in artificial intelligence, owing to their capacity to provide good inference models and to discover the structure of complex domains. One approach to learning BNs from data is to use a scoring metric to evaluate the fitness of any given candidate network for the database, together with an optimization procedure that explores the set of candidate networks. Among the most frequently used optimization methods for this purpose is greedy search, either deterministic or stochastic. This article proposes a hybrid Bayesian network learning algorithm, MMACO, based on the local discovery algorithm Max-Min Parents and Children (MMPC) and ant colony optimization (ACO). MMPC is used to construct the skeleton of the Bayesian network, and ACO is then used to orient its edges, returning the final structure. We apply MMACO (Max-Min ACO) to several sets of benchmark networks and show that it outperforms the greedy search (GS) and simulated annealing (SA) algorithms.
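The two-phase structure of this kind of hybrid learner can be illustrated with a toy sketch. The skeleton phase here is a simple correlation threshold standing in for MMPC, and the orientation phase orients edges along a given variable order standing in for ACO; all function names, the threshold, and the data are assumptions made for illustration, not the MMACO algorithm itself.

```python
import random
from itertools import combinations

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / var if var else 0.0

def skeleton(data, threshold=0.3):
    """Stand-in for the MMPC phase: connect variable pairs whose
    absolute correlation exceeds a threshold (undirected edges)."""
    return {frozenset(pair) for pair in combinations(data, 2)
            if abs(correlation(data[pair[0]], data[pair[1]])) > threshold}

def orient(edges, order):
    """Stand-in for the ACO phase: orient each skeleton edge to point
    from the earlier to the later variable in a candidate order."""
    pos = {v: i for i, v in enumerate(order)}
    return {(min(e, key=pos.get), max(e, key=pos.get)) for e in edges}

# Toy data: C depends on A and B, which are independent.
random.seed(0)
n = 500
A = [random.random() for _ in range(n)]
B = [random.random() for _ in range(n)]
data = {"A": A, "B": B, "C": [a + b for a, b in zip(A, B)]}
dag = orient(skeleton(data), order=["A", "B", "C"])
```

In the real algorithm, MMPC restricts the search space far more carefully than a correlation cutoff, and ACO searches over orientations guided by a scoring metric rather than a fixed order; the sketch only shows how the two phases compose.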