1997 IEEE International Conference on Acoustics, Speech, and Signal Processing
DOI: 10.1109/icassp.1997.596078
Neural-network based measures of confidence for word recognition

Abstract: This paper proposes a probabilistic framework to define and evaluate confidence measures for word recognition. We describe a novel method to combine different knowledge sources and estimate the confidence in a word hypothesis via a neural network. We also propose a measure of the joint performance of the recognition and confidence systems. The definitions and algorithms are illustrated with results on the Switchboard Corpus.
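As a rough illustration of the kind of combiner the abstract describes (a sketch only; the feature names, network size, training procedure, and data below are assumptions, not taken from the paper), a small neural network can map per-word knowledge sources, such as acoustic and language-model scores, to a probability that the hypothesized word is correct:

```python
# Minimal sketch (not the paper's implementation): a one-hidden-layer network
# that combines per-word predictor features into a word-correctness probability.
# Feature meanings, labels, and hyperparameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-word features: [acoustic score, LM score, duration, N-best count]
X = rng.normal(size=(1000, 4))
# Hypothetical 0/1 labels: 1 if the hypothesized word was actually correct.
y = (X @ np.array([1.0, 0.8, -0.3, 0.5]) + 0.2 * rng.normal(size=1000) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with tanh units, sigmoid output -> confidence in [0, 1].
W1 = 0.1 * rng.normal(size=(4, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.normal(size=(8, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(500):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()     # predicted P(word correct | features)
    # Gradient of mean binary cross-entropy w.r.t. the output pre-activation.
    g_out = ((p - y) / len(y)).reshape(-1, 1)
    grad_W2 = h.T @ g_out; grad_b2 = g_out.sum(0)
    g_hid = (g_out @ W2.T) * (1 - h**2)  # backpropagate through tanh
    grad_W1 = X.T @ g_hid; grad_b1 = g_hid.sum(0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

# Per-word confidence estimates for a few hypotheses.
confidence = sigmoid(np.tanh(X[:5] @ W1 + b1) @ W2 + b2).ravel()
print(np.round(confidence, 3))
```

In this sketch the network output is interpreted directly as a confidence score, so downstream components can threshold or rank word hypotheses by it.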

Cited by 89 publications (55 citation statements)
References 10 publications
“…The most popular paradigm to evaluate confidence is as a probability [6,41,46,65,49,66,51,9,53,12,30,61]. Thus, each of these papers models confidence in a fashion such that it conforms to the axioms of probability.…”
Section: Confidence Paradigms
confidence: 99%
“…However, it is also useful to have a way to measure how confident the system is on any given indication. Much of the current literature, [6,9,12,41,65,66], uses either a posterior probability or something similar to a posterior probability to measure confidence in a given declaration. However, there are problems with simply using posterior probabilities.…”
Section: Literature Overview
confidence: 99%
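Posterior-style confidence of the kind this excerpt describes is often approximated by normalizing competing hypothesis scores, for example over an N-best list. The snippet below is a generic sketch of that idea only; the scores and the log-score scaling factor are invented, not taken from any cited system:

```python
# Sketch: approximate a posterior-style confidence by normalizing the scores
# of competing hypotheses (e.g. an N-best list). All numbers are invented.
import numpy as np

log_scores = np.array([-102.3, -104.1, -104.9, -107.5])  # hypothetical total log scores
scale = 0.1  # assumed scaling factor (acts like an inverse temperature)

z = scale * log_scores
posteriors = np.exp(z - z.max())   # shift for numerical stability
posteriors /= posteriors.sum()     # normalize so the values sum to 1

confidence_of_best = posteriors[0]
print(round(float(confidence_of_best), 3))  # posterior-like confidence of the top hypothesis
```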
“…In other words, these approaches do not propose to train the model so as to maximize the spotting performance, and the keyword spotting task is only introduced in the inference step after training. Only a few studies have proposed discriminative parameter training approaches to circumvent this weakness (Benayed et al. 2003; Sandness and Hetherington 2000; Sukkar et al. 1996; Weintraub et al. 1997). Sukkar et al. (1996) proposed to maximize the likelihood ratio between the keyword and garbage models for keyword utterances and to minimize it over a set of false alarms generated by a first keyword spotter.…”
Section: Previous Work
confidence: 99%
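For context on the likelihood-ratio detectors mentioned in this excerpt, a keyword spotter of this family typically compares a keyword-model score against a background ("garbage") model score and fires when the ratio exceeds a threshold. The sketch below is generic; the function name, scores, and threshold are invented, not drawn from any cited system:

```python
# Generic likelihood-ratio keyword detection sketch; the scores and
# threshold are invented for illustration.
def keyword_detected(log_p_keyword: float, log_p_garbage: float, threshold: float = 2.0) -> bool:
    """Fire the detector when the keyword/garbage log-likelihood ratio exceeds the threshold."""
    llr = log_p_keyword - log_p_garbage
    return llr > threshold

print(keyword_detected(-310.2, -318.7))  # True: ratio 8.5 exceeds the threshold
print(keyword_detected(-310.2, -311.0))  # False: ratio 0.8 is below the threshold
```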
“…Other discriminative approaches have been focused on combining different HMM-based keyword detectors. For instance, Weintraub et al (1997) trained a neural network to combine likelihood ratios from different models. Benayed et al (2003) relied on support vector machines to combine different averages of phone-level likelihoods.…”
Section: Previous Work
confidence: 99%