“…In addition, a recurrent neural network language model (RNNLM) [29] was also used to refine the results of first-pass decoding. The CUED-RNNLM Toolkit v1.0 [30] was used to train the RNNLM.…”

1. The splicing indexes per layer can be described as {-1,0,1} {-1,0,1} {-1,0,1,2} {-3,0,3} {-3,0,3} {-6,-3,0} {0} using the notation of [8,11].
2. The architecture can be described as {-2,-1,0,1,2} {-1,0,1} L {-3,0,3} {-3,0,3} L {-3,0,3} {-3,0,3} L, where L represents an LSTMP layer with 512 cells and 128-dimensional recurrent and non-recurrent projections, using the notation of [8,11].
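In this splicing notation, each layer's offset list widens the receptive field of the stack: the total temporal context is the running sum of each layer's most negative and most positive offsets. As an illustration only (this code is not from the paper or its toolkits), a short Python sketch that computes the context window implied by the splice lists in footnote 1:

```python
# Sketch: compute the total left/right temporal context of a TDNN
# from per-layer splicing indexes, following the {-1,0,1} ... notation.
# Illustrative only; the splice lists below mirror footnote 1.

def total_context(splices):
    """Accumulate the temporal context over layers.

    Each layer extends the receptive field by its most negative
    (left) and most positive (right) frame offset.
    """
    left = right = 0
    for layer in splices:
        left += min(layer)
        right += max(layer)
    return left, right

splices = [
    [-1, 0, 1], [-1, 0, 1], [-1, 0, 1, 2],
    [-3, 0, 3], [-3, 0, 3], [-6, -3, 0], [0],
]
print(total_context(splices))  # → (-15, 10)
```

Under this reading, the network in footnote 1 sees 15 frames of left context and 10 frames of right context at its output.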