2013
DOI: 10.1007/s00521-013-1412-0
A winner-take-all Lotka–Volterra recurrent neural network with only one winner in each row and each column

Cited by 1 publication (1 citation statement)
References 19 publications
“…In winner-takes-all, the cluster center closest to the current image hidden layer activity is chosen as the Layer 2 sample among the cluster centers from computed class hypotheses. Accordingly, only one cluster wins to be fed-back into the hidden layer activity (see [71] for the effect of winner-takes-all behavior in neural networks). In average scheme, the cluster centers from different hypotheses are averaged and fed back into the hidden Layer 2 activity.…”
Section: Number of Hypotheses and Competition
Confidence: 99%
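The quoted passage contrasts two feedback schemes: winner-takes-all, where only the cluster center nearest the current hidden-layer activity is fed back, and averaging, where all hypothesized centers are mixed. A minimal sketch of that distinction, with hypothetical vectors standing in for the hidden activity and cluster centers (the original papers' actual representations are not specified here):

```python
import numpy as np

def winner_takes_all(hidden, centers):
    # Winner-takes-all: select the single cluster center closest
    # to the current hidden-layer activity; only this winner is
    # fed back into the hidden layer.
    dists = np.linalg.norm(centers - hidden, axis=1)
    return centers[np.argmin(dists)]

def average_scheme(centers):
    # Average scheme: the cluster centers from all hypotheses are
    # averaged and the mean is fed back instead.
    return centers.mean(axis=0)

# Illustrative (made-up) hidden activity and three hypothesis centers.
hidden = np.array([0.2, 0.8])
centers = np.array([[0.0, 1.0],
                    [1.0, 0.0],
                    [0.3, 0.7]])

wta_feedback = winner_takes_all(hidden, centers)  # nearest center wins
avg_feedback = average_scheme(centers)            # blend of all centers
```

In this toy example the third center is nearest to `hidden`, so winner-takes-all feeds back exactly one hypothesis, while the average scheme returns a compromise vector that belongs to no single hypothesis.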