2022
DOI: 10.1155/2022/7829795
Comparing the Linear and Quadratic Discriminant Analysis of Diabetes Disease Classification Based on Data Multicollinearity

Abstract: Linear and quadratic discriminant analysis are two fundamental classification methods used in statistical learning. Moments (MM), maximum likelihood (ML), minimum volume ellipsoid (MVE), and t-distribution methods are used to estimate the parameters of the independent variables under a multivariate normal distribution in order to classify a binary dependent variable. The MM and ML methods are popular and effective methods that approximate the distribution parameters from observed data. However, the MVE and t-distr…
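The ML estimation mentioned in the abstract can be illustrated with a minimal sketch (not from the paper): for a sample assumed to come from a multivariate normal, the ML estimates are the sample mean and the covariance computed with a 1/n (rather than 1/(n-1)) divisor.

```python
import numpy as np

def ml_estimates(X):
    """Maximum-likelihood estimates of a multivariate normal's
    mean vector and covariance matrix from an (n, p) sample."""
    mu = X.mean(axis=0)
    diff = X - mu
    sigma = diff.T @ diff / len(X)  # ML divides by n, not n - 1
    return mu, sigma
```

For example, `ml_estimates` on a small sample agrees with `np.cov(X.T, bias=True)`, which also uses the 1/n divisor.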

Cited by 8 publications (2 citation statements). References 27 publications (23 reference statements).
“…The initial step in the QDA algorithm is calculating the mean and covariance matrix values for each class. Next, calculate the QDA discriminant function (equation 9), so that data can be classified using equation 10 [37], where 𝛿(𝑥) is the QDA discriminant, 𝑥 is the input data, Σ₁ and Σ₂ are the covariance matrices for classes 1 and 2, 𝜇₁ and 𝜇₂ are the mean vectors for classes 1 and 2, and 𝜋₁ and 𝜋₂ are the prior probabilities for classes 1 and 2.…”
Section: 𝑃(𝑥…
Confidence: 99%
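The steps quoted above can be sketched in a few lines of NumPy (an illustrative sketch, not the cited paper's code): compute, for each class k, the quadratic discriminant δₖ(x) = −½ ln|Σₖ| − ½ (x − μₖ)ᵀ Σₖ⁻¹ (x − μₖ) + ln πₖ, then assign x to the class with the largest score.

```python
import numpy as np

def qda_discriminant(x, mu, sigma, pi):
    """Quadratic discriminant score delta_k(x) for one class,
    given its mean mu, covariance sigma, and prior pi."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(sigma)          # ln|Sigma_k|
    maha = diff @ np.linalg.solve(sigma, diff)    # (x-mu)' Sigma^{-1} (x-mu)
    return -0.5 * logdet - 0.5 * maha + np.log(pi)

def qda_classify(x, params):
    """Assign x to the class whose discriminant score is largest.
    params is a list of (mu, sigma, pi) tuples, one per class."""
    scores = [qda_discriminant(x, mu, sigma, pi) for mu, sigma, pi in params]
    return int(np.argmax(scores))
```

With equal priors and identical covariances, this reduces to assigning x to the class with the nearer mean, which is the linear (LDA) special case.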
“…The following classification models were used on the MIT-BIH dataset: non-tree-based classifiers, including Logistic Regression [47], K-Nearest Neighbors (KNN) [48], Linear Discriminant Analysis (LDA) [49], and Quadratic Discriminant Analysis (QDA) [50], as well as tree-based classifiers, including Decision Trees [51], Bagging [52], Random Forest [53], Adaptive Boosting [54], Gradient Boosting [55], Light Gradient Boosting [56], and Extreme Gradient Boosting [57].…”
Section: Classifiers
Confidence: 99%