2014
DOI: 10.1016/j.neucom.2013.11.010
Imbalanced evolving self-organizing learning

Cited by 35 publications (25 citation statements)
References 36 publications
“…Rather than combining two different optimization methods, most of the researchers modified the original optimization method to obtain better results in the analysis [18][19][20][21]. T. Niknam [22] modified the honeybee optimization method in solving the multiobjective problem for DG allocation and sizing.…”
Section: Related Research
confidence: 99%
“…Table 1 gathers only dynamic features used in the proposed investigation. It should be noticed that, in practice, at least 40 dynamic features can be recorded by a tablet or additionally calculated [20]. [Table 1 fragment: x-velocity; f4 y-velocity; f5 x, y-velocity; f8 measurement time; f9 pen-up time; f10 pen-down time.] For example, acceleration in both the x and y directions can be computed on the basis of velocity and time measures.…”
Section: The Dynamic Signature Features
confidence: 99%
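The snippet above notes that acceleration can be computed from velocity and time measures. A minimal sketch of that derivation, using finite differences on a hypothetical, uniformly sampled pen trajectory (the sampling interval and coordinate values are illustrative assumptions, not data from the cited paper):

```python
# Sketch: deriving velocity and acceleration features from sampled pen
# coordinates. Sampling interval and positions are illustrative only.

def finite_difference(values, dt):
    """First-order finite difference: rate of change between consecutive samples."""
    return [(b - a) / dt for a, b in zip(values, values[1:])]

dt = 0.01                      # hypothetical 10 ms sampling interval
x = [0.0, 1.0, 2.5, 4.5]       # hypothetical x-coordinates from a tablet

vx = finite_difference(x, dt)  # x-velocity
ax = finite_difference(vx, dt) # x-acceleration, computed from velocity and time
```

The same differencing applied to the y-coordinates yields y-velocity and y-acceleration, mirroring the feature construction the snippet describes.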
“…To accomplish this task the well known Dynamic Time Warping (DTW) technique was [Table 2 fragment, “List of similarity measures or coefficients” [20]: ω1 Euclidean, ω2 Gower, ω3 Minkowski, ω4 City Block, ω5 Cosine, ω6 Kulczynski, ω7 …; ω10 Jaccard, ω11 Fidelity, ω12 Bhattacharyya, ω13 Hellinger, ω14 Matusita, ω15 Pearson χ²]…”
Section: Similarity Coefficients
confidence: 99%
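The snippet names Dynamic Time Warping as the alignment technique. A minimal textbook DTW sketch on scalar sequences, with absolute difference as the local cost (any of the listed similarity measures could stand in; the sequences here are illustrative, not signature data):

```python
# Classic O(n*m) Dynamic Time Warping with absolute difference as local cost.
# Sequences are illustrative; real signature features would be vectors.

def dtw_distance(s, t):
    """Cumulative cost of the optimal warping path between sequences s and t."""
    n, m = len(s), len(t)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # step in s only
                                 d[i][j - 1],      # step in t only
                                 d[i - 1][j - 1])  # step in both
    return d[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: warping absorbs the repeated 2
```

Because DTW aligns samples non-linearly in time, two signatures written at different speeds can still match closely, which is why it pairs naturally with per-sample similarity coefficients like those in Table 2.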
“…Cai et al. [9] proposed a hybrid learning model using a modified self-organizing maps algorithm. This method assigns a winner neuron based on an energy function minimizing local error in the competitive learning phase.…”
Section: Sentiment and Emotion Classification
confidence: 99%
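The snippet describes the competitive phase of self-organizing-map learning, where a winner neuron is selected and updated. A minimal sketch of the standard competitive step, using plain squared Euclidean distance to pick the winner (the cited paper replaces this criterion with an energy function minimizing local error, which is not reproduced here; all values are illustrative):

```python
# Minimal competitive-learning step of a self-organizing map.
# Winner selection uses squared Euclidean distance as a generic stand-in
# for the energy-function criterion of the cited method.

def squared_distance(w, x):
    """Squared Euclidean distance between a weight vector and an input."""
    return sum((wi - xi) ** 2 for wi, xi in zip(w, x))

def winner_neuron(weights, x):
    """Index of the neuron whose weight vector is closest to input x."""
    return min(range(len(weights)), key=lambda i: squared_distance(weights[i], x))

def update_winner(weights, x, lr=0.5):
    """Move the winner's weights toward the input; return the winner index."""
    i = winner_neuron(weights, x)
    weights[i] = [wi + lr * (xi - wi) for wi, xi in zip(weights[i], x)]
    return i

weights = [[0.0, 0.0], [1.0, 1.0]]   # two neurons, 2-D weight vectors
idx = update_winner(weights, [0.9, 1.1])  # neuron 1 wins and moves toward the input
```

A full SOM would also update the winner's topological neighbors with a decaying learning rate; only the winner-take-all core relevant to the snippet is shown.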