2015 IEEE 23rd Annual International Symposium on Field-Programmable Custom Computing Machines 2015
DOI: 10.1109/fccm.2015.50
FPGA Acceleration of Recurrent Neural Network Based Language Model

Abstract: The recurrent neural network (RNN) based language model (RNNLM) is a biologically inspired model for natural language processing. It records historical information through additional recurrent connections and is therefore very effective at capturing the semantics of sentences. However, the use of RNNLM has been greatly hindered by the high computation cost of training. This work presents an FPGA implementation framework for RNNLM training acceleration. At the architectural level, we improve the parallelism of RNN tra…
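The recurrence the abstract describes — a hidden state carried forward through recurrent connections so that each step sees the history of the sentence — can be sketched as a minimal Elman-style RNN step. This is an illustrative sketch only, not the paper's implementation; all names, dimensions, and weights here are hypothetical.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # Hypothetical minimal RNN cell: the hidden state h carries the
    # sentence-so-far history through the recurrent weights W_hh.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Toy dimensions (illustrative only, not from the paper)
V, H = 10, 4                               # vocabulary size, hidden size
rng = np.random.default_rng(0)
W_xh = rng.standard_normal((V, H)) * 0.1   # input-to-hidden weights
W_hh = rng.standard_normal((H, H)) * 0.1   # recurrent hidden-to-hidden weights
W_hy = rng.standard_normal((H, V)) * 0.1   # hidden-to-output weights
b_h = np.zeros(H)

h = np.zeros(H)
for word_id in [3, 7, 1]:                  # a toy sentence of word indices
    x = np.zeros(V)
    x[word_id] = 1.0                       # one-hot encoding of the current word
    h = rnn_step(x, h, W_xh, W_hh, b_h)

logits = h @ W_hy
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: next-word distribution
```

The training cost the abstract refers to comes from unrolling this loop over every word of every sentence and backpropagating through the recurrence, which is what the FPGA framework accelerates.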

Cited by 86 publications (44 citation statements) · References 16 publications
“…], [28] while maintaining state-of-the-art accuracy. Prior efforts have been made in FPGA acceleration of speech recognition, classification and language modelling using Recurrent Neural Networks [14], [16], [27]; however, the challenges in the generation of sequences with long-term dependencies, particularly in the audio domain, have not been addressed.…”
Section: Prior Work on Accelerating DNNs for FPGAs
confidence: 99%
“…Deep learning forms of ANN like Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) are widely acknowledged for their compute performance and energy efficiency in classification and machine learning tasks on large datasets (typically in datacenters) [3], [4], [8]. Many ANN applications that interact with physical systems require the accuracy and dynamic range offered by floating point representations, resulting in increased complexity at each neuron.…”
Section: Related Work
confidence: 99%
“…Many ANN applications that interact with physical systems require the accuracy and dynamic range offered by floating point representations, resulting in increased complexity at each neuron. FPGAs represent an ideal platform for accelerating ANN-based systems because they enable large scale parallelism while also supporting high throughput floating point computations [3], [4], [9].…”
Section: Related Work
confidence: 99%
“…In works [20] and [21], the networks are implemented and evaluated in their original form without compression. In work [9] and in our work, all the networks are first compressed, so the equivalent throughput should be multiplied by the compression rate, the same as in [9].…”
confidence: 99%
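The "equivalent throughput" normalization in the last citation statement is simple arithmetic: a compressed network performs proportionally fewer operations per inference, so its measured throughput is scaled by the compression rate before comparing it with uncompressed implementations. A minimal sketch, with a hypothetical helper name and illustrative numbers that are not from the cited works:

```python
def equivalent_throughput(raw_throughput, compression_rate):
    # Hypothetical helper: a model compressed by factor `compression_rate`
    # does proportionally less arithmetic per inference, so its measured
    # throughput is scaled up for an apples-to-apples comparison with
    # uncompressed implementations.
    return raw_throughput * compression_rate

# Illustrative: 2.0 (arbitrary units) measured on a 10x-compressed model
# corresponds to 20.0 equivalent throughput on the uncompressed model.
print(equivalent_throughput(2.0, 10.0))
```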