2013 IEEE Workshop on Automatic Speech Recognition and Understanding
DOI: 10.1109/asru.2013.6707742
Hybrid speech recognition with Deep Bidirectional LSTM

Abstract: Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but it also can generate new trajectory samples. It is an excellent application to help coaches and players decide when and where to shoot. Its structure is particularly suitabl…
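A minimal sketch of the kind of architecture the abstract outlines, assuming a PyTorch implementation: a bidirectional LSTM encoder with a mixture density network head over 2-D coordinates. The class name, layer sizes, and number of mixture components are illustrative assumptions, not details from the paper.

```python
# Hedged sketch (not the paper's code): deep BLSTM encoder + MDN output head.
import torch
import torch.nn as nn

class BLSTMMDN(nn.Module):
    def __init__(self, in_dim=2, hidden=64, layers=2, mixtures=5, out_dim=2):
        super().__init__()
        self.blstm = nn.LSTM(in_dim, hidden, num_layers=layers,
                             bidirectional=True, batch_first=True)
        # MDN head: mixture weights, means and std devs per time step
        self.pi = nn.Linear(2 * hidden, mixtures)
        self.mu = nn.Linear(2 * hidden, mixtures * out_dim)
        self.log_sigma = nn.Linear(2 * hidden, mixtures * out_dim)
        self.mixtures, self.out_dim = mixtures, out_dim

    def forward(self, x):
        h, _ = self.blstm(x)                    # (B, T, 2*hidden)
        pi = torch.softmax(self.pi(h), dim=-1)  # mixture weights
        mu = self.mu(h).view(*h.shape[:2], self.mixtures, self.out_dim)
        sigma = self.log_sigma(h).exp().view_as(mu)  # positive std devs
        return pi, mu, sigma

# Example: a distribution over the next (x, y) position at each time step,
# from which new trajectory samples can also be drawn.
pi, mu, sigma = BLSTMMDN()(torch.randn(8, 30, 2))
print(pi.shape, mu.shape, sigma.shape)
```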

Cited by 1,430 publications (792 citation statements)
References 15 publications
“…To address this problem, we develop a gated recurrent neural network for document composition, which works in a sequential way and adaptively encodes sentence semantics in document representations. The approach is analogous to the recently emerged LSTM (Graves et al., 2013; Zaremba and Sutskever, 2014; Xu et al., 2015) and gated neural network (Cho et al., 2014; Chung et al., 2015). Specifically, the transition function of the gated RNN used in this work is calculated as follows.…”
Section: ……
Mentioning
confidence: 99%
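The excerpt is cut off before the transition function it announces. For context, the standard gated update from Cho et al. (2014), which the statement refers to, takes the form below; whether the cited document-composition model uses exactly this variant is an assumption.

```latex
\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1}) \\            % update gate
r_t &= \sigma(W_r x_t + U_r h_{t-1}) \\            % reset gate
\tilde{h}_t &= \tanh\big(W x_t + U (r_t \odot h_{t-1})\big) \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{aligned}
```

Here $z_t$ controls how much of the previous state is carried forward and $r_t$ controls how much of it is exposed to the candidate state $\tilde{h}_t$.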
“…The single-sequence model exploits stacked bidirectional RNNs (Bi-RNN) (Schuster and Paliwal, 1997; Graves et al., 2005, 2013; Zhou and Xu, 2015). Figure 3 shows the overall architecture, which consists of the following three components:…”
Section: Single-sequence Model
Mentioning
confidence: 99%
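As an illustration of the stacked bidirectional RNN encoder the statement describes, here is a minimal PyTorch sketch for a token-level labelling setup; the embedding size, number of layers, and output head are assumptions, not details from the cited papers.

```python
# Hedged sketch: stacked Bi-LSTM encoder over token embeddings.
import torch
import torch.nn as nn

class StackedBiRNNEncoder(nn.Module):
    def __init__(self, vocab=10000, emb=100, hidden=128, layers=3, tags=20):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        # num_layers > 1 stacks Bi-LSTMs; each layer reads the concatenated
        # forward/backward states of the layer below
        self.birnn = nn.LSTM(emb, hidden, num_layers=layers,
                             bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, tags)    # one score vector per token

    def forward(self, token_ids):
        h, _ = self.birnn(self.embed(token_ids))  # (B, T, 2*hidden)
        return self.out(h)                        # (B, T, tags)

scores = StackedBiRNNEncoder()(torch.randint(0, 10000, (4, 25)))
print(scores.shape)  # torch.Size([4, 25, 20])
```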
“…On a variety of tasks, these LMs have produced substantial gains over conventional generative models based on counting n-grams. Successes include machine translation (Devlin et al., 2014) and speech recognition (Graves et al., 2013). However, log-linear LMs come at a significant cost for computational efficiency.…”
Section: Introduction
Mentioning
confidence: 99%
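To make the efficiency remark concrete: a log-linear LM has to normalise over the entire vocabulary for every prediction, whereas an n-gram model only looks up stored counts. A sketch of that normalisation, with feature functions $f$ and weights $\theta$ (notation assumed, not taken from the cited paper):

```latex
p(w \mid h) = \frac{\exp\!\big(\theta^\top f(w, h)\big)}
                   {\sum_{w' \in V} \exp\!\big(\theta^\top f(w', h)\big)}
```

The sum in the denominator runs over every word $w'$ in the vocabulary $V$, which is the dominant cost when training or querying such models.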