2020
DOI: 10.1080/09720510.2019.1696924
An empirical study on performances of multilayer perceptron, logistic regression, ANFIS, KNN and bagging CART

Cited by 7 publications (4 citation statements)
References 16 publications
“…This ML algorithm is a feed-forward model [58], in which learning proceeds in two phases: forward and backward [59]. Given the loss function E(x, y, θ), with x the inputs, θ the parameters, y the target, and $w_{ij}^{h}$ the weight of the connection from neuron i of layer h−1 to neuron j, the MLP algorithm uses gradient descent or its variants to adjust the weights by taking the partial derivative of the loss function with respect to each parameter [60] to obtain a new weight, as follows:…”
Section: Year (mentioning)
confidence: 99%
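The excerpt breaks off where the update rule would follow. For reference, the standard gradient-descent weight update implied by the quoted definitions is sketched below in LaTeX; the learning-rate symbol η and the iteration index t are assumed notation, since the rule itself is not reproduced on this page.

```latex
% Gradient-descent update for the weight from neuron i of layer h-1 to neuron j.
% \eta (learning rate) and the iteration index t are assumed notation.
\[
  w_{ij}^{h}(t+1) \;=\; w_{ij}^{h}(t) \;-\; \eta\,
  \frac{\partial E(x, y, \theta)}{\partial w_{ij}^{h}}
\]
```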
“…MLP is a feed-forward model [35] which learns in two phases: forward and backward [36]. Given the loss function E(x, y, θ), with x the inputs, θ the parameters, y the target, and $w_{ij}^{h}$ the weight of the connection from neuron i of layer h−1 to neuron j, the MLP algorithm uses gradient descent or its variants to adjust the weights by taking the partial derivative of the loss function with respect to each parameter [37] to obtain the new weight, as follows:…”
Section: Deep Neural Network (mentioning)
confidence: 99%
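To make the two-phase (forward/backward) learning described in these excerpts concrete, the sketch below performs one gradient-descent weight update for a single-hidden-layer MLP in NumPy. The layer sizes, sigmoid activation, squared-error loss, and learning rate are assumptions chosen for illustration, not details taken from the cited papers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed toy dimensions: 3 inputs, 4 hidden neurons, 1 output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # weights w_ij^1: input layer -> hidden layer
W2 = rng.normal(size=(4, 1))   # weights w_ij^2: hidden layer -> output layer
eta = 0.1                      # learning rate (assumed)

x = rng.normal(size=(1, 3))    # one input sample
y = np.array([[1.0]])          # target

# Forward phase: propagate the input through the network.
h = sigmoid(x @ W1)                    # hidden activations
y_hat = sigmoid(h @ W2)                # network output
E = 0.5 * np.sum((y_hat - y) ** 2)     # squared-error loss E(x, y, theta)

# Backward phase: partial derivatives of E with respect to each weight.
delta_out = (y_hat - y) * y_hat * (1 - y_hat)   # error at the output pre-activation
grad_W2 = h.T @ delta_out                       # dE/dW2
delta_hid = (delta_out @ W2.T) * h * (1 - h)    # error backpropagated to the hidden layer
grad_W1 = x.T @ delta_hid                       # dE/dW1

# Gradient-descent update: new weight = old weight - eta * dE/dw.
W2 -= eta * grad_W2
W1 -= eta * grad_W1
```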
“…It is a powerful ensemble algorithm that works well with small datasets. Bagging is used to reduce the variance of a trained model and to prevent overfitting during training [45]. In this algorithm, the dataset is split into clusters.…”
Section: LogitBoost (mentioning)
confidence: 99%
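As a concrete illustration of the bagging idea described above (and of the bagged CART trees named in the paper's title), the sketch below trains several decision trees on bootstrap resamples of the training set and combines them by majority vote. The dataset, ensemble size, and random seeds are placeholders for the example, not values from the cited work.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder dataset; the cited studies use their own data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_trees = 25                     # assumed ensemble size
trees = []

# Bagging: each CART tree is fit on a bootstrap resample of the training set,
# which is what reduces the variance of the combined model.
for _ in range(n_trees):
    idx = rng.integers(0, len(X_train), size=len(X_train))   # sample with replacement
    tree = DecisionTreeClassifier(random_state=0)
    tree.fit(X_train[idx], y_train[idx])
    trees.append(tree)

# Aggregate by majority vote over the individual tree predictions (binary labels 0/1).
all_preds = np.array([t.predict(X_test) for t in trees])
y_pred = (all_preds.mean(axis=0) >= 0.5).astype(int)
print("bagged CART accuracy:", (y_pred == y_test).mean())
```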