2014
DOI: 10.1016/j.ijar.2013.09.012

Learning mixtures of truncated basis functions from data

Abstract: In this paper we investigate methods for learning hybrid Bayesian networks from data. First we utilize a kernel density estimate of the data in order to translate the data into a mixture of truncated basis functions (MoTBF) representation using a convex optimization technique. When utilizing a kernel density representation of the data, the estimation method relies on the specification of a kernel bandwidth. We show that in most cases the method is robust with respect to the choice of bandwidth, but for certain data sets …
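The pipeline the abstract describes (kernel density estimate of the data, then a convex-optimization fit of an MoTBF to it) can be illustrated with a minimal Python sketch. This is not the paper's algorithm: it assumes a polynomial basis (an MoP, one member of the MoTBF family), uses a plain least-squares fit to the KDE in place of the convex optimization, and omits the nonnegativity constraints a valid density requires; the function name `fit_mop_to_kde` is hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_mop_to_kde(data, degree=5, grid_size=200, bandwidth=None):
    """Hedged sketch: approximate a Gaussian KDE of `data` with a
    polynomial density (an MoP). Least squares stands in for the
    paper's convex optimization; nonnegativity is not enforced."""
    kde = gaussian_kde(data, bw_method=bandwidth)  # bandwidth choice matters
    lo, hi = float(data.min()), float(data.max())
    xs = np.linspace(lo, hi, grid_size)
    coeffs = np.polynomial.polynomial.polyfit(xs, kde(xs), degree)
    poly = np.polynomial.Polynomial(coeffs)
    mass = poly.integ()(hi) - poly.integ()(lo)  # renormalize to integrate to 1
    return np.polynomial.Polynomial(coeffs / mass), (lo, hi)

# Example: mop, domain = fit_mop_to_kde(np.random.default_rng(0).normal(size=500))
```

The `bandwidth` parameter mirrors the sensitivity the abstract mentions: different bandwidths give different KDE targets and hence different fitted densities.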

Cited by 27 publications (49 citation statements). References 19 publications (42 reference statements).
“…This can lead to improvements in inference efficiency, especially using MoTBFs, as they can provide accurate estimations with no need to split the domain of the densities [16]. However, we decided to use just MTEs for the sake of computational efficiency.…”
Section: Discussion and Concluding Remarks (mentioning)
confidence: 99%
“…In this paper, we have considered the inclusion of only two exponential terms into a fixed number of subintervals of the range of every variable, whilst MoTBFs in general result in an unbounded number of terms without splitting the domain of the variable. We have resorted to the original estimation algorithm in [27], because it requires fewer iterations for learning the parameters than the general MoTBF algorithm [16].…”
Section: Discussion and Concluding Remarks (mentioning)
confidence: 99%
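The MTE form referenced in this statement, two exponential terms on each subinterval of the variable's domain, is f(x) = a0 + a1*exp(b1*x) + a2*exp(b2*x) per piece. A minimal sketch with made-up parameters (all values hypothetical, and not normalized to integrate to 1 as a learned MTE would be):

```python
import numpy as np

def mte_density(x, splits, params):
    """Piecewise MTE with two exponential terms per subinterval:
    f(x) = a0 + a1*exp(b1*x) + a2*exp(b2*x) on each piece.
    `splits` holds the subinterval boundaries; `params` holds one
    (a0, a1, b1, a2, b2) row per subinterval."""
    x = np.asarray(x, dtype=float)
    idx = np.clip(np.searchsorted(splits, x, side="right") - 1, 0, len(params) - 1)
    p = params[idx]
    return p[..., 0] + p[..., 1] * np.exp(p[..., 2] * x) + p[..., 3] * np.exp(p[..., 4] * x)

# Hypothetical two-piece MTE on [0, 2]:
splits = np.array([0.0, 1.0, 2.0])
params = np.array([[0.3, 0.2, 1.0, -0.1, 2.0],
                   [0.5, 0.4, -1.0, 0.1, -2.0]])
```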
“…The learning of univariate MoTBFs from data was explored in [5], and we will briefly summarize that approach here in the special case of MoPs. The estimation procedure relies on the empirical cumulative distribution function (CDF) as a representation of the data D = {x_1, …, x_N}.…”
Section: Univariate MoPs (mentioning)
confidence: 99%
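The empirical-CDF approach this quote summarizes can be sketched in a few lines: fit a polynomial to the empirical CDF of the sample and differentiate it to obtain the MoP density. This is a simplified reading; the published estimator additionally constrains the fit so the result is a valid CDF (monotone, with a nonnegative derivative), which plain least squares does not guarantee. The function name is hypothetical.

```python
import numpy as np

def mop_from_empirical_cdf(data, degree=6):
    """Sketch: least-squares polynomial fit to the empirical CDF,
    differentiated to yield an MoP density estimate. The real
    method enforces validity constraints omitted here."""
    xs = np.sort(np.asarray(data, dtype=float))
    ecdf = np.arange(1, len(xs) + 1) / len(xs)  # empirical CDF at the sample points
    cdf_poly = np.polynomial.Polynomial.fit(xs, ecdf, degree)
    return cdf_poly.deriv()  # density = d/dx of the fitted CDF

# Example: f = mop_from_empirical_cdf(np.random.default_rng(1).normal(size=300)); f(0.0)
```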
“…However, even though a Bayesian network model populated with MoTBF distributions requires the specification of both marginal and conditional MoTBF distributions, only limited attention has been given to learning the conditional MoTBF distributions directly from data [1,11]. In this paper we first extend previous work on learning marginal MoTBF distributions [5] to also learn joint densities. These are in turn employed to generate the required conditional MoTBFs.…”
Section: Introduction (mentioning)
confidence: 97%
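The joint-to-conditional step described in this quote reduces to f(y | x) = f(x, y) / ∫ f(x, y) dy. A minimal numerical sketch, assuming a fitted joint density is available as a callable `joint(x, y)` (a hypothetical stand-in for the learned joint MoTBF); the paper itself works with closed-form MoTBF representations rather than a grid:

```python
import numpy as np

def conditional_from_joint(joint, x, y_grid):
    """Sketch: f(y | x) = f(x, y) / integral over y of f(x, y),
    evaluated on `y_grid` with trapezoidal normalization.
    `joint` is a hypothetical callable for a fitted joint density."""
    vals = np.array([joint(x, y) for y in y_grid])
    mass = np.sum((vals[:-1] + vals[1:]) / 2.0 * np.diff(y_grid))  # trapezoid rule
    return vals / mass
```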