2017 Artificial Intelligence and Signal Processing Conference (AISP)
DOI: 10.1109/aisp.2017.8324083
A non-parametric mixture of Gaussian naive Bayes classifiers based on local independent features

Cited by 137 publications (60 citation statements)
References 9 publications
“…This probabilistic classifier makes strong independence assumptions between the variables/features given the class. Furthermore, the model assumes that the data follow a Gaussian distribution [36,37]. Random forest (RF) belongs to the ensemble learning methods and is based on decision trees: the model constructs a large number of decision trees.…”
Section: Algorithm 1 Pseudoalgorithm of the Clustering Process (mentioning)
confidence: 99%
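The excerpt above contrasts the two classifiers in passing. As a minimal sketch (the synthetic data and scikit-learn usage are my assumptions, not code from the cited paper), the following puts a Gaussian naive Bayes model, with its per-class Gaussian and conditional-independence assumptions, next to a random forest built from many decision trees:

```python
# Minimal sketch (assumed setup, not from the cited paper): Gaussian naive
# Bayes vs. a random forest on a synthetic dataset, using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic data standing in for any feature matrix with class labels.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# GaussianNB treats features as conditionally independent given the class
# and models each feature with a per-class Gaussian distribution.
gnb = GaussianNB().fit(X_train, y_train)

# RandomForestClassifier constructs an ensemble of decision trees and
# aggregates their votes; n_estimators sets how many trees are grown.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("GaussianNB accuracy:", gnb.score(X_test, y_test))
print("Random forest accuracy:", rf.score(X_test, y_test))
```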
“…The naïve Bayes algorithm gives the probability of a prediction given knowledge of previous events [77][78][79][80][81][82]. Clustering likewise relies on mathematics: the data are grouped into packets so that, within each packet, the points are as close to each other as possible [83][84][85]. It is used in particular to recommend films similar to those we have already seen.…”
Section: Machine Learning (mentioning)
confidence: 99%
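To illustrate both ideas in this excerpt (the toy numbers and the use of k-means are my assumptions, not the citing paper's method), the sketch below applies Bayes' rule to compute a posterior probability from prior events, then groups points into "packets" with a clustering algorithm:

```python
# Illustrative sketch with assumed toy values, not taken from the citing paper.
import numpy as np
from sklearn.cluster import KMeans

# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B), with assumed prior and likelihoods.
p_a = 0.3                                    # prior P(A)
p_b_given_a = 0.8                            # likelihood P(B|A)
p_b = p_b_given_a * p_a + 0.1 * (1 - p_a)    # total probability P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print("P(A|B) =", p_a_given_b)

# k-means clustering: group 2-D points into 2 packets so that points
# within each packet are as close to each other as possible.
points = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print("cluster labels:", labels)
```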
“…In addition to linear classification, SVMs can effectively perform nonlinear classification using the kernel method, implicitly mapping the inputs into a high-dimensional feature space. NBs belong to a probabilistic family of classifiers [32]. They are based on Bayes' theorem combined with the assumption of independence among the features.…”
Section: Background on Machine Learning (mentioning)
confidence: 99%
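The kernel method mentioned in this excerpt can be shown concretely. In the sketch below (the dataset and scikit-learn calls are my assumptions, not code from the citing paper), a linear-kernel and an RBF-kernel SVM are fit on concentric circles, a dataset that is not linearly separable in the input space; only the nonlinear kernel separates the classes well:

```python
# Minimal sketch (assumed data, not from the citing paper): an RBF-kernel SVM
# handling a nonlinearly separable dataset that defeats a linear classifier.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: the two classes cannot be split by a straight line.
X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)

# The RBF kernel implicitly maps inputs into a high-dimensional feature
# space in which the classes become linearly separable.
linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf").fit(X, y)

print("linear-kernel accuracy:", linear_svm.score(X, y))
print("RBF-kernel accuracy:", rbf_svm.score(X, y))
```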