“…ML algorithms have been widely employed in IoT data analytics applications to analyze IoT data and make decisions [28]. This section discusses the commonly used ML algorithms for IoT data analytics in detail.…”
Section: Model Learning
“…K-Nearest Neighbors (KNN) is a basic ML algorithm that can be used to solve both classification and regression problems [28]. KNN identifies the k nearest data points to each test sample in order to estimate its value or category. The distances between a test sample and the training samples are calculated using a distance metric, such as the Euclidean or Mahalanobis distance [28]. The majority label (for classification) or the average observed value (for regression) of the nearby samples is then assigned to each test sample [29].…”
Section: K-Nearest Neighbors (KNN)
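The neighbor-voting procedure described above can be sketched in plain Python; this is an illustrative example only (the function name and toy data are my own, not from the cited works):

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points under Euclidean distance, as described in the excerpt."""
    # Compute the distance from the query to every training point.
    dists = sorted((math.dist(p, query), y) for p, y in zip(train, labels))
    # Take the labels of the k closest points and return the majority label.
    nearest = [y for _, y in dists[:k]]
    return Counter(nearest).most_common(1)[0][0]

# Toy data: two well-separated clusters.
train = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["a", "a", "a", "b", "b", "b"]
print(knn_predict(train, labels, (0.5, 0.5)))  # -> a
```

For regression, the final step would instead average the neighbors' values rather than take a majority vote.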
“…NB is highly interpretable and computationally efficient. Additionally, when compared to other ML algorithms, the primary benefit of NB is that it does not need a large number of training samples [28]. However, a major limitation of NB is that it requires prior knowledge to calculate Bayesian probabilities and make predictions.…”
Section: Naïve Bayes (NB)
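The "prior knowledge" the excerpt refers to can be made concrete with a minimal categorical Naive Bayes sketch: class priors P(y) and per-feature likelihoods P(x_i | y) are estimated by counting, and prediction maximizes their product. This is an assumption-laden illustration (function names and toy data are mine), not the formulation from the cited survey:

```python
from collections import Counter, defaultdict

def train_nb(samples, labels):
    """Estimate class priors and per-feature value counts by counting."""
    priors = Counter(labels)                # class frequencies -> P(y)
    likes = defaultdict(Counter)            # (class, feature idx) -> value counts
    for x, y in zip(samples, labels):
        for i, v in enumerate(x):
            likes[(y, i)][v] += 1
    return priors, likes, len(labels)

def predict_nb(priors, likes, n, x):
    """Pick the class maximizing P(y) * prod_i P(x_i | y), with
    add-one (Laplace) smoothing to avoid zero probabilities."""
    best, best_p = None, -1.0
    for y, cnt in priors.items():
        p = cnt / n                          # prior P(y)
        for i, v in enumerate(x):
            p *= (likes[(y, i)][v] + 1) / (cnt + 2)  # smoothed likelihood
        if p > best_p:
            best, best_p = y, p
    return best

samples = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"), ("rainy", "cool")]
labels = ["no", "no", "yes", "yes"]
priors, likes, n = train_nb(samples, labels)
print(predict_nb(priors, likes, n, ("rainy", "mild")))  # -> yes
```

The "naive" part is the product over features, which assumes they are conditionally independent given the class.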
“…SVM is a powerful algorithm that is capable of handling nonlinear and high-dimensional data with effective regularization and generalization. It is also highly efficient in terms of memory consumption [28]. One significant drawback of SVM is that it does not produce explicit probability estimates, making the model challenging to interpret.…”
Section: Support Vector Machine (SVM)
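The drawback noted above can be seen in a minimal linear SVM sketch trained by subgradient descent on the hinge loss: the decision function returns a raw signed score, not a probability. This is a simplified illustration under my own assumptions (no kernels, toy data, hypothetical function names), not the survey's formulation:

```python
def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Fit w, b by subgradient descent on the regularized hinge loss
    max(0, 1 - y*(w.x + b)) + lam*|w|^2, with labels y in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            margin = t * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:   # inside the margin: hinge subgradient is active
                w = [wi + lr * (t * xi - 2 * lam * wi) for wi, xi in zip(w, x)]
                b += lr * t
            else:            # correctly classified: only regularization shrinks w
                w = [wi - lr * 2 * lam * wi for wi in w]
    return w, b

def svm_decision(w, b, x):
    """Signed distance to the hyperplane: a raw score, not a probability."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

X = [(-2.0, -1.0), (-1.5, -2.0), (2.0, 1.5), (1.0, 2.0)]
y = [-1, -1, 1, 1]
w, b = train_linear_svm(X, y)
print(svm_decision(w, b, (2.0, 2.0)) > 0)  # -> True
```

Turning such scores into calibrated probabilities requires an extra post-processing step (e.g. Platt scaling), which is exactly the interpretability gap the excerpt describes.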