2021
DOI: 10.48550/arxiv.2103.05081
Preprint

A Parallelizable Lattice Rescoring Strategy with Neural Language Models

Abstract: This paper proposes a parallel computation strategy and a posterior-based lattice expansion algorithm for efficient lattice rescoring with neural language models (LMs) for automatic speech recognition. First, lattices from first-pass decoding are expanded by the proposed posterior-based lattice expansion algorithm. Second, each expanded lattice is converted into a minimal list of hypotheses that covers every arc. Each hypothesis is constrained to be the best path for at least one arc it includes. For each latti…
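The "minimal list of hypotheses that covers every arc" can be illustrated with a small sketch: for every arc, compute the single best full path through that arc (best prefix to the arc's source plus the arc plus best suffix from its destination), then deduplicate. The surviving paths cover all arcs, and each is the best path for at least one arc it contains. This is a hypothetical illustration of the idea, not the paper's implementation; the lattice representation and function below are invented for the example.

```python
# Illustrative sketch (not the paper's code): extract a covering hypothesis
# list from a toy lattice. Arcs are (src_node, word, cost, dst_node) tuples;
# lower total path cost is better.
from collections import defaultdict

def covering_hypotheses(arcs, start, end):
    """Return the deduplicated set of best-paths-through-each-arc."""
    out = defaultdict(list)
    inc = defaultdict(list)
    for a in arcs:
        out[a[0]].append(a)
        inc[a[3]].append(a)

    # Topological order via DFS (a lattice is a DAG).
    order, seen = [], set()
    def dfs(n):
        if n in seen:
            return
        seen.add(n)
        for _, _, _, d in out[n]:
            dfs(d)
        order.append(n)
    dfs(start)
    order.reverse()

    # Forward pass: best cost and word sequence from start to each node.
    fwd = {start: (0.0, ())}
    for n in order:
        if n not in fwd:
            continue
        c, path = fwd[n]
        for _, w, ac, d in out[n]:
            if d not in fwd or c + ac < fwd[d][0]:
                fwd[d] = (c + ac, path + (w,))

    # Backward pass: best cost and word sequence from each node to end.
    bwd = {end: (0.0, ())}
    for n in reversed(order):
        if n not in bwd:
            continue
        c, path = bwd[n]
        for s, w, ac, _ in inc[n]:
            if s not in bwd or ac + c < bwd[s][0]:
                bwd[s] = (ac + c, (w,) + path)

    # Best full path through each arc; deduplicating yields the covering list.
    hyps = {}
    for s, w, ac, d in arcs:
        total = fwd[s][0] + ac + bwd[d][0]
        words = fwd[s][1] + (w,) + bwd[d][1]
        if words not in hyps or total < hyps[words]:
            hyps[words] = total
    return sorted(hyps)
```

On a five-arc toy lattice (two choices for the first and second word, one for the third), this produces three distinct hypotheses that jointly cover all five arcs, so a rescoring pass over those three strings scores every arc exactly once.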

Cited by 2 publications (2 citation statements)
References 22 publications
“…Neural LMs are commonly used in second-pass rescoring [1,2,22,23] or first-pass decoding [3] in ASR systems. While for conventional research-oriented datasets like Switchboard the word-level vocabulary size is several dozens of thousands, for larger systems, especially commercially available systems, the vocabulary size can often go up to several hundred thousand.…”
Section: Related Work
confidence: 99%
“…This flaw of N-best rescoring is mitigated when using rescoring approaches dealing with word lattices. There are approaches such as pruned RNNLM lattice rescoring [16], fast N-best rescoring [17] and parallelizable lattice rescoring [18] which exploit the similarity of hypotheses to optimize computations. Nevertheless, these rescoring approaches are rather slow and computationally expensive as they require multiple LM calls.…”
Section: Introduction
confidence: 99%