[Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing 1992
DOI: 10.1109/icassp.1992.225987
New uses for the N-Best sentence hypotheses within the BYBLOS speech recognition system

Abstract: We describe four different ways in which we use the N-Best paradigm within the BYBLOS system. The most obvious use is for the efficient integration of speech recognition with a linguistic natural language understanding module. However, we have extended this principle to several other acoustic knowledge sources. We also describe a simple and efficient means for investigating and incorporating arbitrary new knowledge sources. The N-Best hypotheses are used to provide close alternatives for discriminative training…

Cited by 36 publications (17 citation statements)
References 10 publications
“…However, if the language-understanding component is to provide an additional knowledge source to help in choosing the "right answer," it must have access to multiple hypotheses from the recognizer. The N-best interface (21,22) has proved to be a convenient vehicle for such experimentation: it makes it easy to interface the recognition and understanding components, it requires no change to either component, and it permits off-line exploration of a large search space. Also, there has recently been renewed interest in word networks as a compact representation of a large set of recognition hypotheses (23,24).…”
Section: Interfacing Speech and Language
confidence: 99%
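The N-best interface described in this citation statement can be sketched in a few lines. The following is a minimal, hypothetical illustration — the function names, score combination, and veto behavior are assumptions for exposition, not the BYBLOS implementation: the recognizer hands over a ranked list of scored sentence hypotheses, and the understanding component rejects hypotheses it cannot parse and rescores the rest, with no change needed inside either component.

```python
# Illustrative sketch of an N-best interface between a recognizer and a
# language-understanding module. All names here are hypothetical.

def nbest_interface(hypotheses, understands, extra_score, weight=1.0):
    """Pick the best hypothesis the understanding component accepts.

    hypotheses:  list of (sentence, acoustic_score) pairs, best first.
    understands: predicate -- can the NL module parse this sentence?
    extra_score: additional knowledge-source score for a sentence.
    weight:      relative weight of the extra knowledge source.
    """
    best, best_score = None, float("-inf")
    for sentence, acoustic in hypotheses:
        if not understands(sentence):
            continue  # the NL module vetoes unparsable hypotheses
        total = acoustic + weight * extra_score(sentence)
        if total > best_score:
            best, best_score = sentence, total
    return best
```

Because the interface is just a list of sentences, new knowledge sources can be tried off-line by swapping in a different `extra_score` function.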
“…The top choice in the final list constitutes the speech recognition results reported below. This N-best strategy [3,4] permits the use of otherwise computationally prohibitive models by greatly reducing the search space to a few (N=20-100) word sequences. It has enabled us to use cross-word-boundary triphone models and trigram language models with ease.…”
Section: Byblos
confidence: 99%
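The rescoring step this quote describes — applying an expensive model only to the short N-best list rather than the full search space — can be sketched as follows. This is an assumed, schematic version (the trigram scorer and score combination are placeholders, not the BYBLOS models):

```python
# Illustrative N-best rescoring: a fast first pass yields N candidate word
# sequences; an expensive knowledge source (e.g. a trigram language model)
# then rescores only those N sentences. Names are hypothetical.

def rescore_nbest(nbest, trigram_logprob, lm_weight=1.0):
    """Re-sort an N-best list by combined score, best first.

    nbest:           list of (sentence, first_pass_score) pairs.
    trigram_logprob: log-probability of the sentence under the new model.
    """
    rescored = [(s, score + lm_weight * trigram_logprob(s))
                for s, score in nbest]
    rescored.sort(key=lambda pair: pair[1], reverse=True)
    return rescored
```

With N around 20-100, even a model that is far too slow to use inside the first-pass search costs only N sentence evaluations per utterance.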
“…These weights were optimized on the development test set automatically using the N-best lists [4]. Optimization of these weights reduced the word error by 0.4%.…”
Section: Weight Optimization
confidence: 99%
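The weight optimization mentioned here can be illustrated with a toy version. The excerpt does not say how the optimization was performed, so a simple grid search stands in below; the N-best tuple layout and the use of plain edit distance for word errors are likewise assumptions made for the sketch:

```python
# Hypothetical sketch: choose a knowledge-source weight that minimizes
# word errors when development-set N-best lists are rescored with it.

def word_errors(hyp, ref):
    """Word-level edit distance between hypothesis and reference strings."""
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + (h[i - 1] != r[j - 1]))
    return d[len(h)][len(r)]

def optimize_weight(nbest_lists, refs, candidate_weights):
    """Grid search over candidate weights.

    nbest_lists: one list per utterance of (sentence, acoustic, lm) tuples.
    refs:        reference transcription per utterance.
    Returns the weight whose rescored top choices give the fewest errors.
    """
    best_w, best_err = None, float("inf")
    for w in candidate_weights:
        err = 0
        for hyps, ref in zip(nbest_lists, refs):
            top = max(hyps, key=lambda h: h[1] + w * h[2])[0]
            err += word_errors(top, ref)
        if err < best_err:
            best_w, best_err = w, err
    return best_w
```

Because only the top choice of each rescored list matters, the whole search touches nothing but the precomputed N-best scores, which is what makes tuning on the development set cheap.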