2009
DOI: 10.1007/s00521-009-0296-5

Implementation of an LVQ neural network with a variable size: algorithmic specification, architectural exploration and optimized implementation on FPGA devices

Abstract: To appear. International audience.

Cited by 15 publications (15 citation statements)
References 9 publications

“…The proposed work improves the hardware performance of some related works [10, 12, 13, 26, 27]. At first, we compared the attained performances for the same application (23 × 25) LVQ NN.…”
Section: Comparison To Related Work (mentioning)
confidence: 95%
“…Therefore, to improve our adopted architecture latency, we can later partition the algorithm into two parts: one to be implemented in software and another one in the hardware. The proposed work improves the hardware performance of some related works [10,12,13,26,27]. At first, we compared the attained performances for the same application (23 × 25) LVQ NN.…”
Section: Comparison To Related Work (mentioning)
confidence: 97%
“…al. 20 presented the implementation of an LVQ (Learning Vector Quantization) neural network on FPGA. Savich et.…”
Section: Introduction (mentioning)
confidence: 99%
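
The citation statements above refer to a (23 × 25) LVQ network implemented on FPGA. For orientation, the sketch below shows the LVQ recall (winner-take-all) step in C, assuming 23 inputs, 25 prototype neurons, 16-bit fixed-point data, and a Manhattan distance; the dimensions, data types, and distance metric are illustrative assumptions, not details taken from the cited paper.

```c
/*
 * Minimal sketch of an LVQ recall (winner-take-all) step, for orientation only.
 * The (23 x 25) size quoted above is interpreted here as 23 inputs and 25
 * prototype neurons; the 16-bit fixed-point types and the Manhattan distance
 * are assumptions for illustration, not details from the cited paper.
 */
#include <stdint.h>
#include <stdlib.h>

#define N_INPUTS     23
#define N_PROTOTYPES 25

/* Return the index of the prototype nearest to the input vector. */
int lvq_recall(const int16_t input[N_INPUTS],
               const int16_t prototypes[N_PROTOTYPES][N_INPUTS])
{
    int      winner    = 0;
    uint32_t best_dist = UINT32_MAX;

    for (int p = 0; p < N_PROTOTYPES; p++) {
        uint32_t dist = 0;
        for (int i = 0; i < N_INPUTS; i++) {
            /* accumulate |x_i - w_pi| (Manhattan distance) */
            dist += (uint32_t)abs(input[i] - prototypes[p][i]);
        }
        if (dist < best_dist) {   /* winner-take-all selection */
            best_dist = dist;
            winner    = p;
        }
    }
    return winner;
}
```

In a hardware mapping, the nested distance loop is the natural candidate for unrolling or pipelining, which is presumably the kind of area/latency trade-off that the architectural exploration named in the paper's title addresses.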