2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT)
DOI: 10.1109/icccnt51525.2021.9580151
Parkinson disease prediction using feature selection technique in machine learning

Cited by 6 publications (4 citation statements)
References 4 publications
“…While effective, wrapper methods can be computationally expensive due to repeated model training [20, 21, 22, 23]. Embedded methods seamlessly integrate feature selection into the model training process, selecting features based on their relevance to model performance. Techniques like Lasso regression and decision trees employ embedded feature selection, offering computational efficiency well-suited for larger datasets.…”
Section: Methods
confidence: 99%
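The embedded approach the statement describes can be sketched with a minimal Lasso fit via coordinate descent; coefficients driven exactly to zero are the discarded features. The toy data, alpha value, and function names below are illustrative assumptions, not taken from the cited paper.

```python
def soft_threshold(rho, alpha):
    """Soft-thresholding operator at the core of Lasso coordinate descent."""
    if rho > alpha:
        return rho - alpha
    if rho < -alpha:
        return rho + alpha
    return 0.0

def lasso_coordinate_descent(X, y, alpha, n_iter=100):
    """Fit Lasso on centered data; zero coefficients mark features the
    model discards, i.e. feature selection embedded in training."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the residual that excludes it
            rho = sum(
                X[i][j] * (y[i] - sum(w[k] * X[i][k] for k in range(p) if k != j))
                for i in range(n)
            )
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, alpha) / z
    return w

# Toy centered data: feature 0 fully explains y, feature 1 is noise.
X = [[-2.5, 1], [-1.5, -1], [-0.5, 1], [0.5, -1], [1.5, 1], [2.5, -1]]
y = [2 * row[0] for row in X]

w = lasso_coordinate_descent(X, y, alpha=1.0)
selected = [j for j, wj in enumerate(w) if wj != 0.0]
print(w, selected)  # the noise feature's weight is shrunk exactly to zero
```

The L1 penalty is what makes the selection "embedded": it happens during the fit itself, with no repeated retraining as in wrapper methods.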
“…Because the classifier operates under the strong feature-independence assumption, it is termed naive. There are several variations of NB found in the literature [19], with the key distinction being how the probability of the target class is calculated. These variations include simple Naive Bayes, Gaussian Naive Bayes (used in this study), Multinomial Naive Bayes, Bernoulli Naive Bayes, and Multi-variant Poisson Naive Bayes.…”
Section: Naive Bayes
confidence: 99%
“…The classifier is referred to as naive because it operates under the strong feature-independence assumption. The key distinction between the various versions of NB found in the literature [11] is how the likelihood of the intended class is computed. Simple Naive Bayes, Gaussian Naive Bayes (which was used in this study), Multinomial Naive Bayes, Bernoulli Naive Bayes, and Multi-variant Poisson Naive Bayes are some of these variations.…”
Section: Naïve Bayes
confidence: 99%
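Gaussian Naive Bayes, the variant both quoted studies used, can be sketched from scratch in a few lines: estimate per-class priors, feature means and variances, then pick the class maximizing the log prior plus per-feature Gaussian log-likelihoods under the naive independence assumption. The toy data is invented for illustration.

```python
import math

def fit_gaussian_nb(X, y):
    """Estimate per-class prior, feature means, and feature variances."""
    params = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [
            sum((v - m) ** 2 for v in col) / len(rows)
            for col, m in zip(zip(*rows), means)
        ]
        params[c] = (len(rows) / len(X), means, variances)
    return params

def log_gaussian(x, mean, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def predict(params, x):
    """Class with the highest log prior + summed per-feature log-likelihood;
    the sum over features is the 'naive' independence assumption."""
    def score(c):
        prior, means, variances = params[c]
        return math.log(prior) + sum(
            log_gaussian(v, m, s) for v, m, s in zip(x, means, variances)
        )
    return max(params, key=score)

X = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2],      # class 0 cluster
     [9.0, 10.0], [10.0, 9.5], [9.5, 10.5]]   # class 1 cluster
y = [0, 0, 0, 1, 1, 1]

model = fit_gaussian_nb(X, y)
print(predict(model, [9.4, 10.1]))  # → 1
```

Multinomial, Bernoulli, and Poisson variants differ only in the likelihood term: each swaps `log_gaussian` for the log-probability of its own distribution, which is exactly the "key distinction" the quoted statements describe.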