2003
DOI: 10.1117/12.498954

On-chip training for cellular neural networks using iterative annealing

Cited by 7 publications (2 citation statements) | References 0 publications
“…Nevertheless, due to the straightforward structural architecture of CNN that restricts itself to local couplings, hardware realizations as integrated circuits (e.g., in CMOS technology) of such networks are already available and offer the potentiality of real-time applications for a massively parallel signal processing. Recent studies have shown that robust templates, which do not only work on CNN simulators, but also on hardware implementations can be found using an on-chip optimization (Feiden and Tetzlaff, 2003). Further research will show whether our CNN-based classification approach can be transferred onto a hardware-CNN without reducing the performance.…”
Section: Discussion (citation type: mentioning)
confidence: 92%
“…Connection templates can be derived analytically for some specific applications, but network optimization (via supervised learning [98,99,138,163]) is usually required to identify proper templates.…”
Section: Bivariate Time Series Analysis (citation type: mentioning)
confidence: 99%
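Neither excerpt spells out how template optimization is actually carried out. As a rough illustration of the kind of annealing-based template search the cited paper's title refers to, the sketch below searches 3x3 CNN feedback/feedforward templates and a bias by simulated annealing against a desired output image. Everything here (the discrete-time CNN approximation, the cost function, the cooling schedule, and the toy edge-like task) is an illustrative assumption of ours, not the cited paper's on-chip method.

```python
# Illustrative sketch only: simulated-annealing search for CNN template
# parameters (A, B, bias z). Not the implementation of the cited paper.
import numpy as np

def cnn_output(u, A, B, z, steps=20):
    """Crude discrete-time approximation of a CNN layer with 3x3 templates."""
    x = np.zeros_like(u)
    pad = lambda m: np.pad(m, 1, mode="edge")
    h, w = u.shape
    for _ in range(steps):
        y = np.clip(x, -1.0, 1.0)                 # piecewise-linear output
        yp, up = pad(y), pad(u)
        fb = sum(A[i, j] * yp[i:i + h, j:j + w]   # feedback template
                 for i in range(3) for j in range(3))
        ff = sum(B[i, j] * up[i:i + h, j:j + w]   # feedforward template
                 for i in range(3) for j in range(3))
        x = fb + ff + z
    return np.clip(x, -1.0, 1.0)

def cost(params, u, target):
    A, B, z = params
    return np.mean((cnn_output(u, A, B, z) - target) ** 2)

def anneal(u, target, iters=2000, T0=1.0, seed=0):
    """Very simple annealing loop over the 19 template coefficients."""
    rng = np.random.default_rng(seed)
    cur = (rng.normal(size=(3, 3)), rng.normal(size=(3, 3)), 0.0)
    cur_c = cost(cur, u, target)
    best, best_c = cur, cur_c
    for k in range(iters):
        T = T0 * (1.0 - k / iters) + 1e-3         # linear cooling schedule
        A, B, z = cur
        cand = (A + T * rng.normal(size=(3, 3)),
                B + T * rng.normal(size=(3, 3)),
                z + T * rng.normal())
        c = cost(cand, u, target)
        # Metropolis acceptance: always take improvements, sometimes worse moves
        if c < cur_c or rng.random() < np.exp((cur_c - c) / T):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
    return best, best_c

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    u = rng.choice([-1.0, 1.0], size=(16, 16))       # toy binary input image
    target = np.clip(np.gradient(u, axis=1), -1, 1)  # crude edge-like target
    (A, B, z), err = anneal(u, target)
    print("final MSE:", err)
```

An on-chip variant, as described in the cited work, would evaluate the cost directly on the analog hardware output instead of this software model, so that device non-idealities are absorbed into the optimized templates.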