2006
DOI: 10.1016/j.ijar.2006.01.002
Supervised classification with conditional Gaussian networks: Increasing the structure complexity from naive Bayes

Abstract: Most of the Bayesian network-based classifiers are usually only able to handle discrete variables. However, most real-world domains involve continuous variables. A common practice to deal with continuous variables is to discretize them, with a subsequent loss of information. This work shows how discrete classifier induction algorithms can be adapted to the conditional Gaussian network paradigm to deal with continuous variables without discretizing them. In addition, three novel classifier induction algorithms …

Cited by 108 publications (62 citation statements); references 38 publications.
“…This is a frequent situation when we can measure both the magnitude and the direction of a given phenomenon, e.g., the direction and the velocity of wind currents or the strength and orientation of a magnetic field. We study the hybrid NB classifier where the directional variable X is modeled using von Mises-Fisher distributions and the linear variable Y is modeled using multivariate Gaussian distributions. The multivariate Gaussian distribution N(μ, Σ) is defined by its two parameters: the mean μ and the covariance matrix Σ. The decision function r(Y) of a Gaussian NB [59], found by substituting this probability density function in (3), is:…”
Section: Hybrid Gaussian-von Mises-Fisher Naive Bayes
confidence: 99%
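The Gaussian NB decision function quoted above can be sketched as follows. This is a minimal illustration, not the cited authors' implementation: it assumes a diagonal covariance matrix (so the class-conditional density factorizes into univariate Gaussians), and all names and the toy parameters are invented for the example.

```python
import math

def gaussian_log_density(y, mu, var):
    # log of the univariate normal density N(y; mu, var)
    return -0.5 * (math.log(2 * math.pi * var) + (y - mu) ** 2 / var)

def gaussian_nb_decide(y, priors, means, variances):
    # r(Y): pick the class maximising log prior plus the sum of
    # per-feature class-conditional Gaussian log densities
    # (diagonal-covariance simplification of the quoted decision rule)
    best_c, best_score = None, -math.inf
    for c in priors:
        score = math.log(priors[c]) + sum(
            gaussian_log_density(y[i], means[c][i], variances[c][i])
            for i in range(len(y))
        )
        if score > best_score:
            best_c, best_score = c, score
    return best_c

# toy example: two classes, two continuous features
priors = {0: 0.5, 1: 0.5}
means = {0: [0.0, 0.0], 1: [3.0, 3.0]}
variances = {0: [1.0, 1.0], 1: [1.0, 1.0]}
print(gaussian_nb_decide([2.8, 3.1], priors, means, variances))  # → 1
```

With equal priors and equal variances this reduces to nearest-mean classification, which is why the point near (3, 3) is assigned to class 1.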
“…SelNB finds the variables inducing the most accurate NB structure in a wrapper fashion. Perez et al. [59] proposed a filter-wrapper approach to induce SelNB classifiers. First, the filter algorithm ranks the predictive variables using the mutual information (MI) between each variable and the class.…”
Section: Selective Von Mises Naive Bayes
confidence: 99%
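The filter-wrapper idea described above can be sketched like this: rank variables by empirical MI with the class (filter), then greedily keep each ranked variable only if a wrapper score improves. This is a simplified sketch for discrete toy data, not Perez et al.'s algorithm; `toy_evaluate` is a hypothetical stand-in for the wrapper's NB accuracy estimate.

```python
import math
from collections import Counter, defaultdict

def mutual_information(xs, ys):
    # empirical mutual information between two discrete sequences
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def filter_rank(data, labels):
    # filter step: rank predictor indices by MI with the class
    return sorted(range(len(data[0])),
                  key=lambda i: mutual_information([row[i] for row in data], labels),
                  reverse=True)

def filter_wrapper_select(data, labels, evaluate):
    # wrapper step: scan variables in MI order, keeping each candidate
    # only if the supplied evaluation score strictly improves
    selected, best = [], -math.inf
    for i in filter_rank(data, labels):
        score = evaluate(selected + [i])
        if score > best:
            selected, best = selected + [i], score
    return selected

# toy data: variable 0 mirrors the class, variable 1 is uninformative
data = [[0, 1], [0, 0], [1, 1], [1, 0], [0, 1], [1, 0]]
labels = [0, 0, 1, 1, 0, 1]

def toy_evaluate(sel):
    # hypothetical stand-in for wrapper accuracy: fraction of
    # feature-value groups whose rows all share one class label
    groups = defaultdict(set)
    for row, y in zip(data, labels):
        groups[tuple(row[i] for i in sel)].add(y)
    return sum(len(v) == 1 for v in groups.values()) / len(groups)

print(filter_wrapper_select(data, labels, toy_evaluate))  # → [0]
```

The informative variable is ranked first by MI and selected; the uninformative one does not improve the wrapper score, so it is discarded.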
“…All other attribute nodes depend on the class node; in addition, the attributes are mutually independent given the class [3]. Because of this strong conditional independence assumption, the Naive Bayesian classifier simplifies both training and classification, and its structure is fixed.…”
Section: Naive Bayesian Network
confidence: 99%
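The conditional independence assumption described above means the posterior factorizes as P(c | x) ∝ P(c) · Π_i P(x_i | c). A minimal sketch for discrete attributes, with invented toy parameters:

```python
from math import prod

def nb_posteriors(x, prior, cond):
    # P(c | x) ∝ P(c) * Π_i P(x_i | c): the product over attributes is
    # exactly the conditional-independence assumption described above
    scores = {c: prior[c] * prod(cond[c][i][v] for i, v in enumerate(x))
              for c in prior}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

# hypothetical two-class, two-binary-attribute example
prior = {"spam": 0.5, "ham": 0.5}
cond = {
    "spam": [{0: 0.2, 1: 0.8}, {0: 0.3, 1: 0.7}],
    "ham":  [{0: 0.8, 1: 0.2}, {0: 0.7, 1: 0.3}],
}
print(nb_posteriors((1, 1), prior, cond))
```

Because only per-attribute conditional tables are needed, training reduces to counting, which is what makes the classifier so cheap to train despite the strong assumption.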
“…Another interesting approach is that of Kononenko (1991), which represents several variables in one node. As in Perez et al. (2006), the assumption we will make is that this variable follows a multivariate normal distribution (conditionally on the class), and we will refer to this kind of BN as a Condensed Semi Naïve Bayesian Network (CSNBN).…”
Section: Bayesian Network
confidence: 99%
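The point of condensing several variables into one multivariate-normal node, as described above, is that a joint Gaussian can capture correlation that independent per-variable nodes cannot. A sketch for the two-variable case, using the closed-form 2x2 covariance inverse (illustrative only; names and parameters are invented):

```python
import math

def bivariate_normal_logpdf(y, mu, cov):
    # log density of a 2-D Gaussian with full covariance: a single
    # "condensed" node can model the correlation between two variables
    # that two independent one-dimensional nodes would ignore
    (a, b), (c, d) = cov
    det = a * d - b * c
    dy0, dy1 = y[0] - mu[0], y[1] - mu[1]
    # quadratic form dyᵀ Σ⁻¹ dy via the closed-form 2x2 inverse
    quad = (d * dy0 * dy0 - (b + c) * dy0 * dy1 + a * dy1 * dy1) / det
    return -math.log(2 * math.pi) - 0.5 * math.log(det) - 0.5 * quad

# a correlated point scores higher under the condensed (correlated) node
# than under an independence (identity-covariance) model
correlated = bivariate_normal_logpdf((1, 1), (0, 0), ((1, 0.9), (0.9, 1)))
independent = bivariate_normal_logpdf((1, 1), (0, 0), ((1, 0), (0, 1)))
print(correlated > independent)  # → True
```

This gap in log-likelihood is what the CSNBN exploits when strongly dependent variables are merged into one node.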