2000
DOI: 10.1109/5326.868447

Parallel system design for time-delay neural networks

Abstract: In this paper, we develop a parallel structure for the time-delay neural network used in some speech recognition applications. The effectiveness of the design is illustrated by 1) extracting a window computing model from the time-delay neural systems; 2) building its pipelined architecture with parallel or serial processing stages; and 3) applying this parallel window computing to some typical speech recognition systems. An analysis of the complexity of the proposed design shows a greatly reduced comp…
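For orientation only, a minimal sketch of the generic window computation at the heart of a time-delay neural network: a weight tensor applied to a sliding window of consecutive input frames (essentially a 1-D convolution over time). This is not the parallel design proposed in the paper; all names, shapes, and the tanh nonlinearity are illustrative assumptions.

```python
import numpy as np

def tdnn_layer(x, W, b):
    """Generic TDNN window computation (illustrative, not the paper's design).

    x: (T, F) input frames; W: (D+1, F, H) weights over a window of D+1 frames;
    b: (H,) bias. Returns (T - D, H) output frames.
    """
    D = W.shape[0] - 1
    T = x.shape[0]
    outputs = []
    for t in range(T - D):
        window = x[t:t + D + 1]                    # (D+1, F) sliding window
        z = np.einsum("df,dfh->h", window, W) + b  # weighted sum over the window
        outputs.append(np.tanh(z))                 # typical squashing nonlinearity
    return np.stack(outputs)

# Example: 20 frames of 16 features, window of 3 frames, 8 hidden units
x = np.random.randn(20, 16)
W = np.random.randn(3, 16, 8)
b = np.zeros(8)
print(tdnn_layer(x, W, b).shape)  # (18, 8)
```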

Cited by 4 publications (3 citation statements) | References 26 publications
“…As it is shown in [22], the computational complexity R = R(N) is the number of floating-point addition/multiplication operations, where N is the number of elements of the input data vector. For one input data element (N = 1), the computational complexity of the recurrent NN training algorithm [22] is: (i) R_I = 1741 operations for computing the recurrent NN output value according to (1)-(2); (ii) R_II = 124 operations for computing the sum-squared error (3)-(4) and the adaptive learning rate for the neurons of the output (5) and hidden (6) layers; (iii) R_III = 1793 operations for modifying the synapses and thresholds of all layers according to (7)-(12) at the stage of backward information processing.…”
Section: Fine-grain Parallelization of Recurrent NN Training
confidence: 99%
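A minimal sketch tallying the per-element operation counts quoted above, assuming they simply add up and scale linearly with the input length N (an assumption for illustration, not part of the cited analysis; the function name is hypothetical):

```python
# Per-element floating-point operation counts quoted in the citation above
R_I = 1741    # forward pass: recurrent NN output value, Eqs. (1)-(2)
R_II = 124    # sum-squared error (3)-(4) and adaptive learning rates (5)-(6)
R_III = 1793  # synapse and threshold updates for all layers, Eqs. (7)-(12)

def training_step_ops(n_elements: int) -> int:
    """Estimated add/multiply operations for one training step over an
    n_elements-long input vector, assuming linear scaling in N."""
    per_element = R_I + R_II + R_III   # 3658 operations per input element
    return per_element * n_elements

if __name__ == "__main__":
    for n in (1, 100, 10_000):
        print(f"N = {n:>6}: ~{training_step_ops(n):,} operations")
```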
“…However, coarse-grain methods can hardly be used to parallelize a single NN module; therefore, the development and investigation of fine-grain parallelization approaches is now an urgent task. As a rule, the well-known fine-grain parallelization solutions are based on the development of specialized parallel and transputer hardware or hardware/software architectures [11][12][13][14][15][16][17][18][19], whereas the de facto modern architectures of parallel and distributed computing make the application of such specialized hardware/software solutions to this task practically unrealistic.…”
Section: Introduction
confidence: 99%
“…This problem can be overcome either by devising faster learning algorithms or by implementing the existing algorithms on parallel computers. Given the parallel nature of neural networks, many researchers have endeavored to parallelize different neural networks on various computer systems [2,7,13,16,22,23]. However, there are still several bottlenecks in mapping neural networks onto parallel systems.…”
Section: Introduction
confidence: 99%