2017
DOI: 10.1016/j.eswa.2017.04.025
On the comparison of random and Hebbian weights for the training of single-hidden layer feedforward neural networks

Cited by 5 publications (4 citation statements) · References 31 publications
“…where η is the learning rate, x_i is an m-dimensional input vector (input neuron), and y_i = w_i^T x_i is the output (output neuron). The new and old synapse weights are w_{i+1} and w_i, respectively, and the weight change is given as Δw [41].…”
Section: Training Framework and Procedure (mentioning)
confidence: 99%
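For illustration, here is a minimal sketch of the update the quoted passage describes, assuming the standard Hebbian form w_{i+1} = w_i + η·y_i·x_i (the excerpt itself only defines y_i = w_i^T x_i and states that the change is Δw, so the exact rule is an assumption):

```python
import numpy as np

def hebbian_update(w, x, eta=0.01):
    """One hypothetical Hebbian step: y = w^T x, then w <- w + eta * y * x.

    Sketch only; the cited excerpt defines y_i = w_i^T x_i (output neuron)
    and a weight change Delta w, but does not spell out the full rule.
    """
    y = float(w @ x)          # output neuron activation y_i = w_i^T x_i
    delta_w = eta * y * x     # assumed Hebbian weight change Delta w
    return w + delta_w, y

# usage: m-dimensional input vector, random initial weights
rng = np.random.default_rng(0)
w = rng.normal(size=5)
x = rng.normal(size=5)
w_new, y = hebbian_update(w, x, eta=0.1)
```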
“…Rosanna et al. [15] used the L2 Wasserstein distance, a metric between distributions, to propose a PCA-based method for distributional-valued data. Yining and Joe [16] proposed a dynamic inner PCA framework for modeling dynamic data through maximization of the covariance between a component and the prediction based on its previous values. In this method, a dynamic latent variable model is first extracted to portray the most auto-covarying dynamics in a given dataset.…”
Section: Literature Survey (mentioning)
confidence: 99%
“…It is primarily applied in PCA. From a computational perspective, the GHA [16,17] is beneficial because it can solve eigenvalue problems using iterative methods that require no direct covariance matrix computation. This is more significant when there are many attributes in a given dataset [18,19].…”
Section: Generalized Hebbian Algorithm (GHA) (mentioning)
confidence: 99%
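As a rough illustration of the computational point made above, the following is a minimal single-sample sketch of the GHA (Sanger's rule), assuming the usual update ΔW = η(y·xᵀ − LT[y·yᵀ]·W) with y = W·x; note that the m×m covariance matrix is never formed explicitly:

```python
import numpy as np

def gha_step(W, x, eta=1e-3):
    """One assumed Generalized Hebbian Algorithm (Sanger's rule) update.

    Sketch only: updates the k x m weight matrix W from a single sample x
    without computing the m x m covariance matrix, which is the
    computational advantage mentioned in the citing text.
    """
    y = W @ x                                               # k component outputs
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# usage: extract the top 2 principal directions of 10-dimensional data
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 10)) @ rng.normal(size=(10, 10))
W = rng.normal(scale=0.1, size=(2, 10))
for x in data - data.mean(axis=0):                          # centered samples
    W = gha_step(W, x, eta=1e-3)
```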