2018
DOI: 10.3390/e20040249

Simulation Study on the Application of the Generalized Entropy Concept in Artificial Neural Networks

Abstract: Artificial neural networks are currently among the most commonly used classifiers, and in recent years they have been successfully applied in many practical domains, including banking and finance, health and medicine, and engineering and manufacturing. A large number of error functions have been proposed in the literature to achieve better predictive power. However, only a few works employ Tsallis statistics, although the method itself has been successfully applied in other machine learning techniques. T…

Cited by 16 publications (33 citation statements), published 2019–2024. References 32 publications.
“…where AUC_INB stands for the in-bag sample accuracy, AUC_OOB for the out-of-bag sample accuracy, and w_obs is an observation-level weight vector (see the next paragraph). The parameter α weighs the first and the second term of the equation, i.e., it controls what matters more during learning: the stability of the tree or small errors on the unseen dataset [54]. It should be noted that some observations are more difficult to classify correctly than others.…”
Section: Weighted Random Forest
confidence: 99%
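The combined criterion itself is elided in the excerpt, so the following is a minimal sketch assuming a convex combination α·AUC_INB + (1 − α)·AUC_OOB; the function name and the blending form are illustrative assumptions, not the cited paper's exact formula.

```python
from sklearn.metrics import roc_auc_score

def weighted_tree_score(y_inb, p_inb, y_oob, p_oob, alpha=0.5):
    """Blend in-bag and out-of-bag AUC for one tree (assumed convex form).
    alpha trades tree stability (in-bag fit) against small errors
    on unseen data (out-of-bag fit)."""
    auc_inb = roc_auc_score(y_inb, p_inb)  # in-bag sample accuracy
    auc_oob = roc_auc_score(y_oob, p_oob)  # out-of-bag sample accuracy
    return alpha * auc_inb + (1.0 - alpha) * auc_oob
```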
“…The second measure used is the Area Under the ROC Curve (AUC), which is particularly important in this research since it was used to tune the parameters of each model [18]. The construction of the ROC curve and the calculation of the AUC measure are described in Section 3 [52,54].…”
Section: Performance Measures
confidence: 99%
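Since the excerpt leans on the ROC construction and the AUC calculation, here is a short, self-contained sketch of one standard way to build the curve and integrate it with the trapezoidal rule; it is illustrative and not the construction from Section 3 of the cited work.

```python
import numpy as np

def roc_auc(y_true, scores):
    """Construct an ROC curve by sweeping a threshold down the sorted
    scores, then integrate TPR over FPR with the trapezoidal rule."""
    y = np.asarray(y_true)[np.argsort(-np.asarray(scores))]
    tpr = np.concatenate(([0.0], np.cumsum(y) / y.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / (1 - y).sum()))
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
    return fpr, tpr, auc

# Example: a perfectly separating score yields AUC = 1.0
fpr, tpr, auc = roc_auc([0, 1, 1, 0], [0.1, 0.9, 0.8, 0.4])
```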
“…f for the detection of unusually large learning effort has been defined via unusually large weight increments. In reality, it is practically impossible to choose the best bias that determines the unusually large weight-update magnitudes for proper evaluation of (4), so the detection sensitivity for unusually large weight updates was resolved via a power-law-based multi-scale approach, as in [34,43], which is reviewed and modified in later sections. (19) detects the noise as novelty immediately at its occurrence at k > 400, and LE then decreases as the large variance of learning increments becomes the new usual learning pattern (details on LE and its orders can be found in Sections 4.1 and 4.2).…”
Section: Concept Of Learning Information Measure
confidence: 99%
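The detection rule for "unusually large" increments is only referenced here, not reproduced, so the sketch below substitutes a simple threshold at sensitivity α relative to the recent mean increment magnitude; the rule, names, and window are assumptions standing in for the power-law multi-scale approach of [34,43].

```python
import numpy as np

def unusual_updates(dw_now, dw_window, alpha):
    """Flag weight increments that are unusually large at sensitivity alpha.
    dw_now: current weight increments, shape (n_w,)
    dw_window: recent increments, shape (T, n_w), defining the 'usual' scale.
    The threshold rule is an assumed stand-in for the cited detection."""
    scale = np.mean(np.abs(dw_window), axis=0)  # usual increment magnitude per weight
    return np.abs(dw_now) > alpha * scale       # one boolean flag per weight
```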
“…where E = 0 means that no learning updates of any parameters are unusually large for any sensitivity α, and E = 1 means that all learning updates of all parameters are unusually large for all sensitivities α; the sum is normalized by the length of the vector α and by the total number of neural weights n_w, so (19) represents an approximation of LE. In particular, it is shown in [34] that the sum of L(α) along α, given by formula (12), in principle correlates with the log-log plot slope H calculated by formula (16).…”
Section: Practical Algorithm For Learning Entropy
confidence: 99%
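Formula (19) itself is not shown in the excerpt, but its verbal description (a sum of unusual-update indicators normalized by the length of α and by n_w) admits a direct sketch; the detection rule reuses the assumed threshold from the previous block.

```python
import numpy as np

def approximate_learning_entropy(dw_now, dw_window, alphas):
    """Approximate Learning Entropy as described for (19):
    E = 0 when no update is unusually large at any sensitivity alpha,
    E = 1 when every update is unusually large at every sensitivity."""
    scale = np.mean(np.abs(dw_window), axis=0)   # usual increment size per weight
    count = sum(np.count_nonzero(np.abs(dw_now) > a * scale) for a in alphas)
    return count / (len(alphas) * dw_now.size)   # normalized to [0, 1]
```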